Nov 4 23:54:42.095999 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:54:42.096052 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:42.096072 kernel: BIOS-provided physical RAM map:
Nov 4 23:54:42.096087 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 4 23:54:42.096100 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 4 23:54:42.096114 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 4 23:54:42.096132 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 4 23:54:42.096151 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 4 23:54:42.096167 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable
Nov 4 23:54:42.096182 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data
Nov 4 23:54:42.096198 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable
Nov 4 23:54:42.096212 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Nov 4 23:54:42.096226 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 4 23:54:42.096242 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 4 23:54:42.096270 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 4 23:54:42.096290 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 4 23:54:42.096309 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 4 23:54:42.096349 kernel: NX (Execute Disable) protection: active
Nov 4 23:54:42.096367 kernel: APIC: Static calls initialized
Nov 4 23:54:42.096384 kernel: efi: EFI v2.7 by EDK II
Nov 4 23:54:42.096404 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018
Nov 4 23:54:42.096422 kernel: random: crng init done
Nov 4 23:54:42.096447 kernel: secureboot: Secure boot disabled
Nov 4 23:54:42.096464 kernel: SMBIOS 2.4 present.
Nov 4 23:54:42.096483 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 4 23:54:42.096502 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:54:42.096519 kernel: Hypervisor detected: KVM
Nov 4 23:54:42.096538 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 4 23:54:42.096555 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:54:42.096574 kernel: kvm-clock: using sched offset of 11002267410 cycles
Nov 4 23:54:42.096593 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:54:42.096612 kernel: tsc: Detected 2299.998 MHz processor
Nov 4 23:54:42.096648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:54:42.096668 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:54:42.096685 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 4 23:54:42.096702 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 4 23:54:42.096718 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:54:42.096736 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 4 23:54:42.096753 kernel: Using GB pages for direct mapping
Nov 4 23:54:42.096775 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:54:42.096799 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 4 23:54:42.096816 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 4 23:54:42.096833 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 4 23:54:42.096849 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 4 23:54:42.096866 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 4 23:54:42.096888 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 4 23:54:42.096907 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 4 23:54:42.096925 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 4 23:54:42.096940 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 4 23:54:42.096957 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 4 23:54:42.096974 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 4 23:54:42.096997 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 4 23:54:42.097013 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 4 23:54:42.097101 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 4 23:54:42.097119 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 4 23:54:42.097138 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 4 23:54:42.097157 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 4 23:54:42.097174 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 4 23:54:42.097197 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 4 23:54:42.097215 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 4 23:54:42.097234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 23:54:42.097254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 4 23:54:42.097273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 4 23:54:42.097292 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Nov 4 23:54:42.097312 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Nov 4 23:54:42.097386 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Nov 4 23:54:42.097406 kernel: Zone ranges:
Nov 4 23:54:42.097425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:54:42.097444 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 4 23:54:42.097463 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 4 23:54:42.097482 kernel: Device empty
Nov 4 23:54:42.097502 kernel: Movable zone start for each node
Nov 4 23:54:42.097525 kernel: Early memory node ranges
Nov 4 23:54:42.097545 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 4 23:54:42.097564 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 4 23:54:42.097583 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff]
Nov 4 23:54:42.097602 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff]
Nov 4 23:54:42.097630 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 4 23:54:42.097649 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 4 23:54:42.097669 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 4 23:54:42.097692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:54:42.097711 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 4 23:54:42.097730 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 4 23:54:42.097750 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Nov 4 23:54:42.097770 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 4 23:54:42.097789 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 4 23:54:42.097808 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 4 23:54:42.097831 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:54:42.097850 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:54:42.097869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:54:42.097888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:54:42.097907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:54:42.097927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:54:42.097946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:54:42.097969 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:54:42.097987 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:54:42.098006 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:54:42.098023 kernel: CPU topo: Max. threads per core: 2
Nov 4 23:54:42.098043 kernel: CPU topo: Num. cores per package: 1
Nov 4 23:54:42.098062 kernel: CPU topo: Num. threads per package: 2
Nov 4 23:54:42.098081 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 23:54:42.098100 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 4 23:54:42.098123 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:54:42.098143 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:54:42.098162 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 23:54:42.098181 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 23:54:42.098200 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 23:54:42.098219 kernel: pcpu-alloc: [0] 0 1
Nov 4 23:54:42.098238 kernel: kvm-guest: PV spinlocks enabled
Nov 4 23:54:42.098260 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 23:54:42.098282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:42.098301 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 4 23:54:42.098321 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 23:54:42.098361 kernel: Fallback order for Node 0: 0
Nov 4 23:54:42.098380 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Nov 4 23:54:42.098400 kernel: Policy zone: Normal
Nov 4 23:54:42.098424 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:54:42.098443 kernel: software IO TLB: area num 2.
Nov 4 23:54:42.098476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 23:54:42.098499 kernel: Kernel/User page tables isolation: enabled
Nov 4 23:54:42.098519 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:54:42.098539 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:54:42.098560 kernel: Dynamic Preempt: voluntary
Nov 4 23:54:42.098580 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:54:42.098601 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:54:42.098633 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 23:54:42.098654 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:54:42.098675 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:54:42.098695 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:54:42.098718 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:54:42.098739 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 23:54:42.098760 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:42.098780 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:42.098801 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:54:42.098821 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 23:54:42.098842 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:54:42.098865 kernel: Console: colour dummy device 80x25
Nov 4 23:54:42.098886 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:54:42.098906 kernel: ACPI: Core revision 20240827
Nov 4 23:54:42.098926 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:54:42.098945 kernel: x2apic enabled
Nov 4 23:54:42.098966 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:54:42.098986 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 4 23:54:42.099007 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 4 23:54:42.099032 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 4 23:54:42.099052 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 4 23:54:42.099072 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 4 23:54:42.099093 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:54:42.099113 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 4 23:54:42.099134 kernel: Spectre V2 : Mitigation: IBRS
Nov 4 23:54:42.099154 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:54:42.099178 kernel: RETBleed: Mitigation: IBRS
Nov 4 23:54:42.099198 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:54:42.099218 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 4 23:54:42.099238 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:54:42.099259 kernel: MDS: Mitigation: Clear CPU buffers
Nov 4 23:54:42.099279 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 23:54:42.099299 kernel: active return thunk: its_return_thunk
Nov 4 23:54:42.099322 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:54:42.099859 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:54:42.099880 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:54:42.099902 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:54:42.099922 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:54:42.099943 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 4 23:54:42.099960 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:54:42.099987 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:54:42.100008 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:54:42.100029 kernel: landlock: Up and running.
Nov 4 23:54:42.100049 kernel: SELinux: Initializing.
Nov 4 23:54:42.100070 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:54:42.100091 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:54:42.100112 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 4 23:54:42.100137 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 4 23:54:42.100158 kernel: signal: max sigframe size: 1776
Nov 4 23:54:42.100178 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:54:42.100199 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:54:42.100221 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:54:42.100241 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:54:42.100262 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:54:42.100286 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:54:42.100306 kernel: .... node #0, CPUs: #1
Nov 4 23:54:42.100341 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 4 23:54:42.100377 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 4 23:54:42.100397 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:54:42.100418 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 4 23:54:42.100438 kernel: Memory: 7586532K/7860544K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 268180K reserved, 0K cma-reserved)
Nov 4 23:54:42.100464 kernel: devtmpfs: initialized
Nov 4 23:54:42.100484 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:54:42.100505 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 4 23:54:42.100526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:54:42.100546 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:54:42.100566 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:54:42.100591 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:54:42.100610 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:54:42.100673 kernel: audit: type=2000 audit(1762300479.221:1): state=initialized audit_enabled=0 res=1
Nov 4 23:54:42.100692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:54:42.100710 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:54:42.100729 kernel: cpuidle: using governor menu
Nov 4 23:54:42.100748 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:54:42.100767 kernel: dca service started, version 1.12.1
Nov 4 23:54:42.100790 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:54:42.100809 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:54:42.100829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:54:42.100846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:54:42.100866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:54:42.100885 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:54:42.100903 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:54:42.100927 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:54:42.100945 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:54:42.100966 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 4 23:54:42.100986 kernel: ACPI: Interpreter enabled
Nov 4 23:54:42.101006 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 23:54:42.101026 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:54:42.101045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:54:42.101071 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 4 23:54:42.101091 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 4 23:54:42.101112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:54:42.101483 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:54:42.101769 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 4 23:54:42.102034 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 4 23:54:42.102065 kernel: PCI host bridge to bus 0000:00
Nov 4 23:54:42.102320 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:54:42.102601 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:54:42.102864 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:54:42.103116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 4 23:54:42.103409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:54:42.103723 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:54:42.104036 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:54:42.104343 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 23:54:42.104640 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 4 23:54:42.105246 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Nov 4 23:54:42.105546 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 4 23:54:42.105821 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Nov 4 23:54:42.106093 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:54:42.107642 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Nov 4 23:54:42.107937 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Nov 4 23:54:42.108226 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:54:42.108527 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Nov 4 23:54:42.110558 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Nov 4 23:54:42.110592 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:54:42.110623 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:54:42.110645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:54:42.110673 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:54:42.110694 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 4 23:54:42.110715 kernel: iommu: Default domain type: Translated
Nov 4 23:54:42.110736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:54:42.110757 kernel: efivars: Registered efivars operations
Nov 4 23:54:42.110773 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:54:42.110792 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:54:42.110819 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 4 23:54:42.110841 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 4 23:54:42.110861 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff]
Nov 4 23:54:42.110879 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 4 23:54:42.110900 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 4 23:54:42.110920 kernel: vgaarb: loaded
Nov 4 23:54:42.110941 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:54:42.110968 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:54:42.110989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:54:42.111011 kernel: pnp: PnP ACPI init
Nov 4 23:54:42.111032 kernel: pnp: PnP ACPI: found 7 devices
Nov 4 23:54:42.111053 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:54:42.111075 kernel: NET: Registered PF_INET protocol family
Nov 4 23:54:42.111097 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:54:42.111118 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 4 23:54:42.111145 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:54:42.111166 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 23:54:42.111188 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 4 23:54:42.111209 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 4 23:54:42.111230 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 4 23:54:42.111249 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 4 23:54:42.111269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:54:42.111293 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:54:42.111590 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:54:42.111841 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:54:42.112081 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:54:42.112316 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 4 23:54:42.114654 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 4 23:54:42.114693 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:54:42.114714 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 4 23:54:42.114736 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 4 23:54:42.114757 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 4 23:54:42.114778 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 4 23:54:42.114799 kernel: clocksource: Switched to clocksource tsc
Nov 4 23:54:42.114823 kernel: Initialise system trusted keyrings
Nov 4 23:54:42.114844 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 4 23:54:42.114865 kernel: Key type asymmetric registered
Nov 4 23:54:42.114885 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:54:42.114906 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:54:42.114927 kernel: io scheduler mq-deadline registered
Nov 4 23:54:42.114948 kernel: io scheduler kyber registered
Nov 4 23:54:42.114972 kernel: io scheduler bfq registered
Nov 4 23:54:42.114993 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:54:42.115014 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 4 23:54:42.115282 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 4 23:54:42.115309 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 4 23:54:42.117652 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 4 23:54:42.117685 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 4 23:54:42.117960 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 4 23:54:42.117986 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:54:42.118008 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:54:42.118028 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 4 23:54:42.118049 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 4 23:54:42.118070 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 4 23:54:42.118357 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 4 23:54:42.118391 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:54:42.118412 kernel: i8042: Warning: Keylock active
Nov 4 23:54:42.118432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:54:42.118452 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:54:42.118731 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 4 23:54:42.118980 kernel: rtc_cmos 00:00: registered as rtc0
Nov 4 23:54:42.119233 kernel: rtc_cmos 00:00: setting system clock to 2025-11-04T23:54:40 UTC (1762300480)
Nov 4 23:54:42.119509 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 4 23:54:42.119534 kernel: intel_pstate: CPU model not supported
Nov 4 23:54:42.119555 kernel: pstore: Using crash dump compression: deflate
Nov 4 23:54:42.119575 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 4 23:54:42.119596 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:54:42.119624 kernel: Segment Routing with IPv6
Nov 4 23:54:42.119651 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:54:42.119672 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:54:42.119692 kernel: Key type dns_resolver registered
Nov 4 23:54:42.119713 kernel: IPI shorthand broadcast: enabled
Nov 4 23:54:42.119734 kernel: sched_clock: Marking stable (1271004856, 147095844)->(1451510453, -33409753)
Nov 4 23:54:42.119754 kernel: registered taskstats version 1
Nov 4 23:54:42.119775 kernel: Loading compiled-in X.509 certificates
Nov 4 23:54:42.119799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:54:42.119820 kernel: Demotion targets for Node 0: null
Nov 4 23:54:42.119840 kernel: Key type .fscrypt registered
Nov 4 23:54:42.119859 kernel: Key type fscrypt-provisioning registered
Nov 4 23:54:42.119879 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:54:42.119900 kernel: ima: Can not allocate sha384 (reason: -2)
Nov 4 23:54:42.119921 kernel: ima: No architecture policies found
Nov 4 23:54:42.119944 kernel: clk: Disabling unused clocks
Nov 4 23:54:42.119965 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:54:42.119985 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:54:42.120005 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:54:42.120026 kernel: Run /init as init process
Nov 4 23:54:42.120046 kernel: with arguments:
Nov 4 23:54:42.120066 kernel: /init
Nov 4 23:54:42.120086 kernel: with environment:
Nov 4 23:54:42.120110 kernel: HOME=/
Nov 4 23:54:42.120130 kernel: TERM=linux
Nov 4 23:54:42.120151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 4 23:54:42.120171 kernel: SCSI subsystem initialized
Nov 4 23:54:42.120464 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Nov 4 23:54:42.120771 kernel: scsi host0: Virtio SCSI HBA
Nov 4 23:54:42.121072 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 4 23:54:42.123426 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Nov 4 23:54:42.123752 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 4 23:54:42.124042 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 4 23:54:42.124317 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 4 23:54:42.124619 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 4 23:54:42.124649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:54:42.124690 kernel: GPT:25804799 != 33554431
Nov 4 23:54:42.124715 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:54:42.124735 kernel: GPT:25804799 != 33554431
Nov 4 23:54:42.124755 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:54:42.124779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 4 23:54:42.125046 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 4 23:54:42.125071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:54:42.125090 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:54:42.125110 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:54:42.125132 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:54:42.125154 kernel: raid6: avx2x4 gen() 18077 MB/s
Nov 4 23:54:42.125179 kernel: raid6: avx2x2 gen() 17881 MB/s
Nov 4 23:54:42.125198 kernel: raid6: avx2x1 gen() 13488 MB/s
Nov 4 23:54:42.125219 kernel: raid6: using algorithm avx2x4 gen() 18077 MB/s
Nov 4 23:54:42.125240 kernel: raid6: .... xor() 6708 MB/s, rmw enabled
Nov 4 23:54:42.125259 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:54:42.125277 kernel: xor: automatically using best checksumming function avx
Nov 4 23:54:42.125294 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:54:42.125312 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (155)
Nov 4 23:54:42.127384 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:54:42.127413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:42.127436 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 4 23:54:42.127458 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:54:42.127480 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:54:42.127509 kernel: loop: module loaded
Nov 4 23:54:42.127531 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:54:42.127557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:54:42.127582 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:54:42.127610 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:54:42.127640 systemd[1]: Detected virtualization google.
Nov 4 23:54:42.127662 systemd[1]: Detected architecture x86-64.
Nov 4 23:54:42.127687 systemd[1]: Running in initrd.
Nov 4 23:54:42.127709 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:54:42.127732 systemd[1]: Hostname set to .
Nov 4 23:54:42.127754 systemd[1]: Initializing machine ID from random generator.
Nov 4 23:54:42.127776 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:54:42.127798 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:54:42.127821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:54:42.127847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:54:42.127871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:54:42.127894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:54:42.127918 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:54:42.127942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:54:42.127969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:54:42.127992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:54:42.128014 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:54:42.128037 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:54:42.128060 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:54:42.128089 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:54:42.128112 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:54:42.128134 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:54:42.128157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:54:42.128180 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:54:42.128202 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:54:42.128224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:54:42.128250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:54:42.128272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:54:42.128295 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:54:42.128317 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:54:42.128353 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:54:42.128375 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:54:42.128402 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:54:42.128425 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:54:42.128447 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:54:42.128469 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:54:42.128491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:54:42.128510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:42.128538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:54:42.128558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:54:42.128587 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:54:42.128624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:54:42.128703 systemd-journald[290]: Collecting audit messages is disabled.
Nov 4 23:54:42.128752 systemd-journald[290]: Journal started
Nov 4 23:54:42.128798 systemd-journald[290]: Runtime Journal (/run/log/journal/77deeda2f6a2459581754d35062cbf85) is 8M, max 148.6M, 140.6M free.
Nov 4 23:54:42.132362 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:54:42.138652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:54:42.147353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:54:42.145621 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:54:42.157566 kernel: Bridge firewalling registered
Nov 4 23:54:42.157195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:54:42.158987 systemd-modules-load[292]: Inserted module 'br_netfilter'
Nov 4 23:54:42.162034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:54:42.168659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:54:42.179683 systemd-tmpfiles[302]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:54:42.193572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:42.201456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:54:42.206262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:54:42.213948 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:54:42.217949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:54:42.224570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:54:42.259112 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:54:42.268127 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:54:42.310080 systemd-resolved[320]: Positive Trust Anchors:
Nov 4 23:54:42.310697 systemd-resolved[320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:54:42.310707 systemd-resolved[320]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:54:42.333694 dracut-cmdline[332]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:54:42.310776 systemd-resolved[320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:54:42.371209 systemd-resolved[320]: Defaulting to hostname 'linux'.
Nov 4 23:54:42.373971 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:54:42.382598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:54:42.456383 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:54:42.477466 kernel: iscsi: registered transport (tcp)
Nov 4 23:54:42.507489 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:54:42.507580 kernel: QLogic iSCSI HBA Driver
Nov 4 23:54:42.541991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:54:42.562459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:54:42.565197 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:54:42.630245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:54:42.633635 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:54:42.641673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:54:42.683474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:54:42.693563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:54:42.738413 systemd-udevd[565]: Using default interface naming scheme 'v257'.
Nov 4 23:54:42.760962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:54:42.767729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:54:42.810315 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation
Nov 4 23:54:42.820934 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:54:42.832092 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:54:42.858207 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:54:42.869582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:54:42.910526 systemd-networkd[690]: lo: Link UP
Nov 4 23:54:42.911098 systemd-networkd[690]: lo: Gained carrier
Nov 4 23:54:42.912893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:54:42.918593 systemd[1]: Reached target network.target - Network.
Nov 4 23:54:42.990701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:54:43.000841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:54:43.193904 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 4 23:54:43.273420 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:54:43.292846 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 4 23:54:43.297358 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 4 23:54:43.324683 systemd-networkd[690]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:54:43.324875 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:54:43.326711 systemd-networkd[690]: eth0: Link UP
Nov 4 23:54:43.326982 systemd-networkd[690]: eth0: Gained carrier
Nov 4 23:54:43.327001 systemd-networkd[690]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:54:43.336398 systemd-networkd[690]: eth0: Overlong DHCP hostname received, shortened from 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8.c.flatcar-212911.internal' to 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8'
Nov 4 23:54:43.347821 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:54:43.336418 systemd-networkd[690]: eth0: DHCPv4 address 10.128.0.112/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 4 23:54:43.385827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 4 23:54:43.405791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 4 23:54:43.409894 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:54:43.416003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:54:43.416405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:43.422681 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:43.433722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:54:43.448518 disk-uuid[811]: Primary Header is updated.
Nov 4 23:54:43.448518 disk-uuid[811]: Secondary Entries is updated.
Nov 4 23:54:43.448518 disk-uuid[811]: Secondary Header is updated.
Nov 4 23:54:43.458449 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:54:43.463471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:54:43.463723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:54:43.463779 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:54:43.467686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:54:43.515341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:54:43.579536 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:54:44.570104 disk-uuid[816]: Warning: The kernel is still using the old partition table.
Nov 4 23:54:44.570104 disk-uuid[816]: The new table will be used at the next reboot or after you
Nov 4 23:54:44.570104 disk-uuid[816]: run partprobe(8) or kpartx(8)
Nov 4 23:54:44.570104 disk-uuid[816]: The operation has completed successfully.
Nov 4 23:54:44.581816 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:54:44.581976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:54:44.587076 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:54:44.663371 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (842)
Nov 4 23:54:44.681427 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:44.681521 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:44.700244 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 23:54:44.700365 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 23:54:44.700410 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 23:54:44.722378 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:44.723810 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:54:44.740953 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:54:45.005852 ignition[861]: Ignition 2.22.0
Nov 4 23:54:45.005872 ignition[861]: Stage: fetch-offline
Nov 4 23:54:45.008786 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:54:45.005938 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:45.013169 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 23:54:45.005957 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:45.006127 ignition[861]: parsed url from cmdline: ""
Nov 4 23:54:45.006134 ignition[861]: no config URL provided
Nov 4 23:54:45.006144 ignition[861]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:54:45.072639 unknown[868]: fetched base config from "system"
Nov 4 23:54:45.006163 ignition[861]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:54:45.072659 unknown[868]: fetched base config from "system"
Nov 4 23:54:45.006174 ignition[861]: failed to fetch config: resource requires networking
Nov 4 23:54:45.072670 unknown[868]: fetched user config from "gcp"
Nov 4 23:54:45.006792 ignition[861]: Ignition finished successfully
Nov 4 23:54:45.075586 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 23:54:45.060075 ignition[868]: Ignition 2.22.0
Nov 4 23:54:45.096514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:54:45.060084 ignition[868]: Stage: fetch
Nov 4 23:54:45.152847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:54:45.060240 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:45.166524 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:54:45.060251 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:45.203493 systemd-networkd[690]: eth0: Gained IPv6LL
Nov 4 23:54:45.060368 ignition[868]: parsed url from cmdline: ""
Nov 4 23:54:45.227570 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:54:45.060375 ignition[868]: no config URL provided
Nov 4 23:54:45.242205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:54:45.060382 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:54:45.259517 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:54:45.060391 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:54:45.275574 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:54:45.060429 ignition[868]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 4 23:54:45.292550 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:54:45.063138 ignition[868]: GET result: OK
Nov 4 23:54:45.309545 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:54:45.063272 ignition[868]: parsing config with SHA512: 4e9772ee4de2ee2a5cdd4736c292bd3ee2a01a69feb11eca743c92109c4b72340c70c09853a66ab28e68b0fcb025398f0d2243bd01fc140c62547d8f251b45ea
Nov 4 23:54:45.327999 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:54:45.073288 ignition[868]: fetch: fetch complete
Nov 4 23:54:45.073297 ignition[868]: fetch: fetch passed
Nov 4 23:54:45.073407 ignition[868]: Ignition finished successfully
Nov 4 23:54:45.150120 ignition[874]: Ignition 2.22.0
Nov 4 23:54:45.150129 ignition[874]: Stage: kargs
Nov 4 23:54:45.150283 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:45.150293 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:45.151193 ignition[874]: kargs: kargs passed
Nov 4 23:54:45.151280 ignition[874]: Ignition finished successfully
Nov 4 23:54:45.224725 ignition[881]: Ignition 2.22.0
Nov 4 23:54:45.224733 ignition[881]: Stage: disks
Nov 4 23:54:45.224910 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:45.224930 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:45.225839 ignition[881]: disks: disks passed
Nov 4 23:54:45.225895 ignition[881]: Ignition finished successfully
Nov 4 23:54:45.407549 systemd-fsck[889]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 4 23:54:45.479032 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:54:45.480512 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:54:45.699378 kernel: EXT4-fs (sda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:54:45.700495 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:54:45.709163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:54:45.719549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:54:45.733510 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:54:45.753155 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 23:54:45.802687 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (897)
Nov 4 23:54:45.802750 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:45.802768 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:45.753240 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:54:45.848574 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 23:54:45.848623 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 23:54:45.848666 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 23:54:45.753290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:54:45.833668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:54:45.855845 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:54:45.874553 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:54:46.017216 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:54:46.027121 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:54:46.037482 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:54:46.046542 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:54:46.199890 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:54:46.201891 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:54:46.218828 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:54:46.258113 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:54:46.274302 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:46.296606 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:54:46.316968 ignition[1010]: INFO : Ignition 2.22.0
Nov 4 23:54:46.316968 ignition[1010]: INFO : Stage: mount
Nov 4 23:54:46.338506 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:46.338506 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:46.338506 ignition[1010]: INFO : mount: mount passed
Nov 4 23:54:46.338506 ignition[1010]: INFO : Ignition finished successfully
Nov 4 23:54:46.320202 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:54:46.334033 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:54:46.396748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:54:46.445436 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1022)
Nov 4 23:54:46.463321 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:54:46.463432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:54:46.480097 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 23:54:46.480197 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 23:54:46.480224 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 23:54:46.488712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:54:46.535903 ignition[1039]: INFO : Ignition 2.22.0
Nov 4 23:54:46.535903 ignition[1039]: INFO : Stage: files
Nov 4 23:54:46.548535 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:54:46.548535 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:54:46.548535 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:54:46.548535 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 4 23:54:46.548535 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 4 23:54:46.545602 unknown[1039]: wrote ssh authorized keys file for user: core
Nov 4 23:54:51.965089 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:54:59.251291 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 4 23:54:59.380469 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 4 23:54:59.672041 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:55:00.068124 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 4 23:55:00.068124 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:55:00.104681 ignition[1039]: INFO : files: files passed
Nov 4 23:55:00.104681 ignition[1039]: INFO : Ignition finished successfully
Nov 4 23:55:00.076799 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:55:00.087468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:55:00.106094 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:55:00.153942 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:55:00.306533 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:00.306533 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:00.154072 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:55:00.341677 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:00.193623 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:55:00.210827 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:55:00.251944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:55:00.339081 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:55:00.339223 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:55:00.351788 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:55:00.373773 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:55:00.393204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:55:00.394721 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:55:00.465635 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:55:00.483676 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:55:00.514706 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:55:00.514982 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:55:00.532823 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:55:00.542942 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:55:00.577846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:55:00.578063 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:55:00.602882 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:55:00.612909 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:55:00.628945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:55:00.642912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:55:00.659944 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:55:00.678914 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:55:00.695896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:55:00.728728 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:55:00.729153 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:55:00.764753 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:55:00.780692 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:55:00.781054 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:55:00.781294 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:55:00.818634 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:55:00.819065 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:55:00.855644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:55:00.856026 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:55:00.883687 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:55:00.883947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:55:00.908751 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:55:00.909119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:55:00.927767 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:55:00.927969 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:55:00.948078 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:55:00.955678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:55:00.955901 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:55:00.996792 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:55:01.014871 ignition[1094]: INFO : Ignition 2.22.0
Nov 4 23:55:01.014871 ignition[1094]: INFO : Stage: umount
Nov 4 23:55:01.034888 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:01.034888 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 4 23:55:01.034888 ignition[1094]: INFO : umount: umount passed
Nov 4 23:55:01.034888 ignition[1094]: INFO : Ignition finished successfully
Nov 4 23:55:01.021512 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:55:01.022661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:55:01.044755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:55:01.044971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:55:01.045370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:55:01.045568 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:55:01.097878 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:55:01.099472 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:55:01.099595 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:55:01.112196 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:55:01.112359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:55:01.134871 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:55:01.135016 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:55:01.151908 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:55:01.151979 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:55:01.167598 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:55:01.167691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:55:01.185586 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 23:55:01.185693 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 23:55:01.203734 systemd[1]: Stopped target network.target - Network.
Nov 4 23:55:01.212702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:55:01.212796 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:55:01.226770 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:55:01.243694 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:55:01.247462 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:55:01.257681 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:55:01.282471 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:55:01.298552 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:55:01.298638 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:55:01.317582 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:55:01.317661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:55:01.335564 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:55:01.335679 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:55:01.353589 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:55:01.353707 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:55:01.371562 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:55:01.371669 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:55:01.388716 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:55:01.397749 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:55:01.424174 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:55:01.424323 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:55:01.441437 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:55:01.441589 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:55:01.452048 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:55:01.465810 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:55:01.465886 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:55:01.483101 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:55:01.517471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:55:01.517607 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:55:01.535597 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:55:01.535700 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:55:01.551603 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:55:01.551706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:55:01.567664 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:55:01.585106 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:55:01.943545 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:55:01.585291 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:55:01.603750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:55:01.603892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:55:01.619727 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:55:01.619787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:55:01.644665 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:55:01.644761 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:55:01.669807 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:55:01.669901 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:55:01.696731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:55:01.696839 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:55:01.725016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:55:01.740462 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:55:01.740592 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:55:01.740729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:55:01.740783 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:55:01.768582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:55:01.768683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:01.797557 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:55:01.797684 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:55:01.815210 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:55:01.815365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:55:01.833884 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:55:01.843067 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:55:01.895075 systemd[1]: Switching root.
Nov 4 23:55:02.172478 systemd-journald[290]: Journal stopped
Nov 4 23:55:04.792410 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:55:04.792469 kernel: SELinux: policy capability open_perms=1
Nov 4 23:55:04.792501 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:55:04.792521 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:55:04.792541 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:55:04.792561 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:55:04.792584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:55:04.792610 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:55:04.792632 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:55:04.792653 kernel: audit: type=1403 audit(1762300502.590:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:55:04.792679 systemd[1]: Successfully loaded SELinux policy in 116.828ms.
Nov 4 23:55:04.792703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.524ms.
Nov 4 23:55:04.792729 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:55:04.792757 systemd[1]: Detected virtualization google.
Nov 4 23:55:04.792781 systemd[1]: Detected architecture x86-64.
Nov 4 23:55:04.792804 systemd[1]: Detected first boot.
Nov 4 23:55:04.792833 systemd[1]: Initializing machine ID from random generator.
Nov 4 23:55:04.792856 zram_generator::config[1137]: No configuration found.
Nov 4 23:55:04.793011 kernel: Guest personality initialized and is inactive
Nov 4 23:55:04.793036 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:55:04.793057 kernel: Initialized host personality
Nov 4 23:55:04.793079 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:55:04.793108 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:55:04.793131 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:55:04.793154 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:55:04.793186 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:55:04.793212 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:55:04.793236 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:55:04.793259 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:55:04.793289 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:55:04.793313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:55:04.793359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:55:04.793384 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:55:04.793407 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:55:04.793436 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:55:04.793461 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:55:04.793487 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:55:04.793509 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:55:04.793532 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:55:04.793556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:55:04.793585 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:55:04.793613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:55:04.793637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:55:04.793660 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:55:04.793684 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:55:04.793707 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:55:04.793730 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:55:04.793757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:55:04.793780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:55:04.793805 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:55:04.793829 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:55:04.793853 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:55:04.793878 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:55:04.793903 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:55:04.793932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:55:04.793958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:55:04.793984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:55:04.794017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:55:04.794042 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 23:55:04.794068 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 23:55:04.794093 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 23:55:04.794119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:04.794144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 23:55:04.794179 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 23:55:04.794208 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 23:55:04.794385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 23:55:04.794415 systemd[1]: Reached target machines.target - Containers.
Nov 4 23:55:04.794438 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 23:55:04.794461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:55:04.794485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:55:04.794508 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 23:55:04.794537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:55:04.794561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:55:04.794586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:55:04.794611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 23:55:04.794636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:55:04.794662 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 23:55:04.794692 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 23:55:04.794718 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 23:55:04.794742 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 23:55:04.794769 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 23:55:04.794792 kernel: fuse: init (API version 7.41)
Nov 4 23:55:04.794815 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:55:04.794838 kernel: ACPI: bus type drm_connector registered
Nov 4 23:55:04.794864 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:55:04.794887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:55:04.794911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:55:04.794933 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 23:55:04.794999 systemd-journald[1225]: Collecting audit messages is disabled.
Nov 4 23:55:04.795050 systemd-journald[1225]: Journal started
Nov 4 23:55:04.795092 systemd-journald[1225]: Runtime Journal (/run/log/journal/88b078d803954b04ba6eb4616ec86656) is 8M, max 148.6M, 140.6M free.
Nov 4 23:55:03.604728 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 23:55:03.624149 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 4 23:55:03.624948 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 23:55:04.809399 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 23:55:04.821512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:55:04.854366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:04.867053 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:55:04.877932 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 23:55:04.886778 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 23:55:04.897700 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 23:55:04.906679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 23:55:04.915700 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 23:55:04.924735 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 23:55:04.934054 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 23:55:04.945186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:55:04.955989 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 23:55:04.956540 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 23:55:04.966965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:55:04.967275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:55:04.977936 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:55:04.978226 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:55:04.987835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:55:04.988125 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:55:04.998796 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 23:55:04.999079 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 23:55:05.009815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:55:05.010105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:55:05.020891 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:55:05.032008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:55:05.043815 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 23:55:05.054979 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 23:55:05.066789 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:55:05.088907 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:55:05.099851 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 23:55:05.111999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 23:55:05.128499 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 23:55:05.137496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 23:55:05.137747 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:55:05.147724 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 23:55:05.157669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:55:05.159286 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 23:55:05.175379 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 23:55:05.185561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:55:05.187840 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 23:55:05.196536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:55:05.199531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:55:05.211502 systemd-journald[1225]: Time spent on flushing to /var/log/journal/88b078d803954b04ba6eb4616ec86656 is 52.364ms for 940 entries.
Nov 4 23:55:05.211502 systemd-journald[1225]: System Journal (/var/log/journal/88b078d803954b04ba6eb4616ec86656) is 8M, max 588.1M, 580.1M free.
Nov 4 23:55:05.297629 systemd-journald[1225]: Received client request to flush runtime journal.
Nov 4 23:55:05.297729 kernel: loop1: detected capacity change from 0 to 128048
Nov 4 23:55:05.219900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 23:55:05.236790 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 23:55:05.248674 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:55:05.260889 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:55:05.280589 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 23:55:05.295606 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 23:55:05.309158 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 23:55:05.320394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 23:55:05.332852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:55:05.364417 kernel: loop2: detected capacity change from 0 to 50552
Nov 4 23:55:05.379524 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 23:55:05.389223 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 23:55:05.405613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:55:05.417671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:55:05.436907 kernel: loop3: detected capacity change from 0 to 110984
Nov 4 23:55:05.445210 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 23:55:05.518391 kernel: loop4: detected capacity change from 0 to 224512
Nov 4 23:55:05.510171 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Nov 4 23:55:05.510208 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Nov 4 23:55:05.534020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:55:05.544898 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 23:55:05.585160 kernel: loop5: detected capacity change from 0 to 128048
Nov 4 23:55:05.631392 kernel: loop6: detected capacity change from 0 to 50552
Nov 4 23:55:05.663391 kernel: loop7: detected capacity change from 0 to 110984
Nov 4 23:55:05.687016 systemd-resolved[1277]: Positive Trust Anchors:
Nov 4 23:55:05.687044 systemd-resolved[1277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:55:05.687053 systemd-resolved[1277]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:55:05.687127 systemd-resolved[1277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:55:05.695993 systemd-resolved[1277]: Defaulting to hostname 'linux'.
Nov 4 23:55:05.698178 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:55:05.709359 kernel: loop1: detected capacity change from 0 to 224512
Nov 4 23:55:05.714687 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:55:05.736257 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-gce.raw'.
Nov 4 23:55:05.743664 (sd-merge)[1290]: Merged extensions into '/usr'.
Nov 4 23:55:05.751774 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 23:55:05.752001 systemd[1]: Reloading...
Nov 4 23:55:05.911370 zram_generator::config[1318]: No configuration found.
Nov 4 23:55:06.308734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 23:55:06.308985 systemd[1]: Reloading finished in 555 ms.
Nov 4 23:55:06.333150 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 23:55:06.356629 systemd[1]: Starting ensure-sysext.service...
Nov 4 23:55:06.365574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:55:06.400182 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 23:55:06.405311 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:55:06.405463 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:55:06.405946 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:55:06.406472 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:55:06.407744 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:55:06.408166 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Nov 4 23:55:06.408286 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Nov 4 23:55:06.414352 systemd[1]: Reload requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:55:06.414391 systemd[1]: Reloading...
Nov 4 23:55:06.420788 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:55:06.420815 systemd-tmpfiles[1358]: Skipping /boot
Nov 4 23:55:06.438203 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:55:06.438418 systemd-tmpfiles[1358]: Skipping /boot
Nov 4 23:55:06.504363 zram_generator::config[1385]: No configuration found.
Nov 4 23:55:06.764248 systemd[1]: Reloading finished in 348 ms.
Nov 4 23:55:06.803713 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:55:06.824938 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:55:06.839579 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 23:55:06.854354 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 23:55:06.868477 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 23:55:06.882715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:55:06.893925 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 23:55:06.912625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:06.912980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:55:06.916270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:55:06.928971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:55:06.940888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:55:06.949618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:55:06.949859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:55:06.950051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:06.969030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:06.970205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:55:06.971402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:55:06.971723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:55:06.972018 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:06.983821 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 23:55:07.010059 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:07.011428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:55:07.014930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:55:07.029705 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 4 23:55:07.037696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:55:07.038018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:55:07.038341 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 23:55:07.057363 augenrules[1460]: No rules
Nov 4 23:55:07.049670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:55:07.056752 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:55:07.057114 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:55:07.068590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:55:07.068906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:55:07.079013 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:55:07.079365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:55:07.085098 systemd-udevd[1443]: Using default interface naming scheme 'v257'.
Nov 4 23:55:07.097829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:55:07.098937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:55:07.110710 systemd[1]: Finished ensure-sysext.service.
Nov 4 23:55:07.119063 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 23:55:07.130064 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:55:07.130377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:55:07.153921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:55:07.156173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:55:07.168887 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 4 23:55:07.179596 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Nov 4 23:55:07.190702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:55:07.203431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 23:55:07.226478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:55:07.235592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:55:07.315212 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Nov 4 23:55:07.333780 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 23:55:07.393309 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Nov 4 23:55:07.393414 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Nov 4 23:55:07.487092 systemd-networkd[1493]: lo: Link UP
Nov 4 23:55:07.489924 systemd-networkd[1493]: lo: Gained carrier
Nov 4 23:55:07.496425 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:55:07.507478 systemd[1]: Reached target network.target - Network.
Nov 4 23:55:07.509649 systemd-networkd[1493]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:55:07.509658 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:55:07.513831 systemd-networkd[1493]: eth0: Link UP
Nov 4 23:55:07.516893 systemd-networkd[1493]: eth0: Gained carrier
Nov 4 23:55:07.517420 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 23:55:07.517866 systemd-networkd[1493]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:55:07.533038 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 23:55:07.533673 systemd-networkd[1493]: eth0: Overlong DHCP hostname received, shortened from 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8.c.flatcar-212911.internal' to 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8'
Nov 4 23:55:07.533697 systemd-networkd[1493]: eth0: DHCPv4 address 10.128.0.112/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 4 23:55:07.617713 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 4 23:55:07.617846 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 4 23:55:07.646366 kernel: ACPI: button: Power Button [PWRF]
Nov 4 23:55:07.666358 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Nov 4 23:55:07.685781 kernel: ACPI: button: Sleep Button [SLPF]
Nov 4 23:55:07.797186 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 23:55:07.865391 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 23:55:08.045629 kernel: EDAC MC: Ver: 3.0.0
Nov 4 23:55:08.115699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:55:08.220552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 4 23:55:08.238987 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 23:55:08.264895 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 23:55:08.274964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 23:55:08.281609 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 23:55:08.300839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 23:55:08.312032 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 23:55:08.347042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:08.359883 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:55:08.368678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 23:55:08.378550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 23:55:08.388521 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 23:55:08.398704 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 23:55:08.407684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 23:55:08.417517 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 23:55:08.427493 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 23:55:08.427559 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:55:08.435579 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:55:08.445253 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 23:55:08.456236 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 23:55:08.466951 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 23:55:08.477771 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 23:55:08.488516 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 23:55:08.506321 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 23:55:08.516749 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 23:55:08.528481 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 23:55:08.538559 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:55:08.547500 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:55:08.555581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:55:08.555638 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:55:08.557274 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 23:55:08.579013 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 4 23:55:08.593878 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 23:55:08.611887 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 23:55:08.640155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 23:55:08.652043 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 23:55:08.660531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 23:55:08.665500 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 23:55:08.680699 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 23:55:08.683279 jq[1558]: false
Nov 4 23:55:08.685511 coreos-metadata[1555]: Nov 04 23:55:08.684 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Nov 4 23:55:08.686811 coreos-metadata[1555]: Nov 04 23:55:08.686 INFO Fetch successful
Nov 4 23:55:08.687735 coreos-metadata[1555]: Nov 04 23:55:08.687 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Nov 4 23:55:08.690366 coreos-metadata[1555]: Nov 04 23:55:08.688 INFO Fetch successful
Nov 4 23:55:08.690366 coreos-metadata[1555]: Nov 04 23:55:08.688 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Nov 4 23:55:08.690366 coreos-metadata[1555]: Nov 04 23:55:08.689 INFO Fetch successful
Nov 4 23:55:08.690546 coreos-metadata[1555]: Nov 04 23:55:08.690 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Nov 4 23:55:08.691124 coreos-metadata[1555]: Nov 04 23:55:08.690 INFO Fetch successful
Nov 4 23:55:08.692703 systemd[1]: Started ntpd.service - Network Time Service.
Nov 4 23:55:08.702455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 23:55:08.703621 extend-filesystems[1561]: Found /dev/sda6
Nov 4 23:55:08.734559 extend-filesystems[1561]: Found /dev/sda9
Nov 4 23:55:08.726169 oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Nov 4 23:55:08.715434 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting
Nov 4 23:55:08.741314 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:55:08.743881 extend-filesystems[1561]: Checking size of /dev/sda9
Nov 4 23:55:08.732821 oslogin_cache_refresh[1562]: Failure getting users, quitting
Nov 4 23:55:08.729736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 23:55:08.732852 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:55:08.761583 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 23:55:08.732943 oslogin_cache_refresh[1562]: Refreshing group entry cache
Nov 4 23:55:08.739281 oslogin_cache_refresh[1562]: Failure getting groups, quitting
Nov 4 23:55:08.739303 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:55:08.770184 extend-filesystems[1561]: Resized partition /dev/sda9
Nov 4 23:55:08.774394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Nov 4 23:55:08.775265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 23:55:08.777602 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 23:55:08.785075 extend-filesystems[1587]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 23:55:08.796298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 23:55:08.812995 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2604027 blocks
Nov 4 23:55:08.820448 ntpd[1567]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 21:31:21 UTC 2025 (1): Starting
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 21:31:21 UTC 2025 (1): Starting
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: ----------------------------------------------------
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: ntp-4 is maintained by Network Time Foundation,
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: corporation. Support and training for ntp-4 are
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: available at https://www.nwtime.org/support
Nov 4 23:55:08.821197 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: ----------------------------------------------------
Nov 4 23:55:08.820537 ntpd[1567]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 4 23:55:08.820554 ntpd[1567]: ----------------------------------------------------
Nov 4 23:55:08.820568 ntpd[1567]: ntp-4 is maintained by Network Time Foundation,
Nov 4 23:55:08.820582 ntpd[1567]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 4 23:55:08.820596 ntpd[1567]: corporation. Support and training for ntp-4 are
Nov 4 23:55:08.820610 ntpd[1567]: available at https://www.nwtime.org/support
Nov 4 23:55:08.820624 ntpd[1567]: ----------------------------------------------------
Nov 4 23:55:08.831445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 23:55:08.835286 ntpd[1567]: proto: precision = 0.076 usec (-24)
Nov 4 23:55:08.837531 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: proto: precision = 0.076 usec (-24)
Nov 4 23:55:08.841067 ntpd[1567]: basedate set to 2025-10-23
Nov 4 23:55:08.841364 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: basedate set to 2025-10-23
Nov 4 23:55:08.841364 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: gps base set to 2025-10-26 (week 2390)
Nov 4 23:55:08.841106 ntpd[1567]: gps base set to 2025-10-26 (week 2390)
Nov 4 23:55:08.841659 ntpd[1567]: Listen and drop on 0 v6wildcard [::]:123
Nov 4 23:55:08.841721 ntpd[1567]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 4 23:55:08.841788 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Listen and drop on 0 v6wildcard [::]:123
Nov 4 23:55:08.841788 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 4 23:55:08.841997 ntpd[1567]: Listen normally on 2 lo 127.0.0.1:123
Nov 4 23:55:08.842053 ntpd[1567]: Listen normally on 3 eth0 10.128.0.112:123
Nov 4 23:55:08.842158 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Listen normally on 2 lo 127.0.0.1:123
Nov 4 23:55:08.842158 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Listen normally on 3 eth0 10.128.0.112:123
Nov 4 23:55:08.842158 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: Listen normally on 4 lo [::1]:123
Nov 4 23:55:08.842158 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: bind(21) AF_INET6 [fe80::4001:aff:fe80:70%2]:123 flags 0x811 failed: Cannot assign requested address
Nov 4 23:55:08.842098 ntpd[1567]: Listen normally on 4 lo [::1]:123
Nov 4 23:55:08.842395 ntpd[1567]: 4 Nov 23:55:08 ntpd[1567]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:70%2]:123
Nov 4 23:55:08.842143 ntpd[1567]: bind(21) AF_INET6 [fe80::4001:aff:fe80:70%2]:123 flags 0x811 failed: Cannot assign requested address
Nov 4 23:55:08.842171 ntpd[1567]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:70%2]:123
Nov 4 23:55:08.853411 update_engine[1585]: I20251104 23:55:08.853261  1585 main.cc:92] Flatcar Update Engine starting
Nov 4 23:55:08.899269 kernel: ntpd[1567]: segfault at 24 ip 00005558c9a9aaeb sp 00007ffd7a2cd720 error 4 in ntpd[68aeb,5558c9a38000+80000] likely on CPU 0 (core 0, socket 0)
Nov 4 23:55:08.899414 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Nov 4 23:55:08.899454 kernel: EXT4-fs (sda9): resized filesystem to 2604027
Nov 4 23:55:08.882586 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 23:55:08.902918 jq[1588]: true
Nov 4 23:55:08.883835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 23:55:08.884369 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 23:55:08.884687 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 23:55:08.894109 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 23:55:08.896512 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 23:55:08.898530 systemd-coredump[1600]: Process 1567 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Nov 4 23:55:08.918383 extend-filesystems[1587]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 4 23:55:08.918383 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 4 23:55:08.918383 extend-filesystems[1587]: The filesystem on /dev/sda9 is now 2604027 (4k) blocks long.
Nov 4 23:55:08.964884 extend-filesystems[1561]: Resized filesystem in /dev/sda9
Nov 4 23:55:08.920855 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:55:08.923652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:55:08.943035 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 23:55:08.944904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 23:55:09.008902 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 4 23:55:09.056846 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 23:55:09.062864 jq[1606]: true
Nov 4 23:55:09.156099 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 23:55:09.167948 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Nov 4 23:55:09.179140 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:55:09.185126 systemd[1]: Started systemd-coredump@0-1600-0.service - Process Core Dump (PID 1600/UID 0).
Nov 4 23:55:09.196978 tar[1603]: linux-amd64/LICENSE
Nov 4 23:55:09.197477 tar[1603]: linux-amd64/helm
Nov 4 23:55:09.201796 systemd-logind[1581]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 4 23:55:09.201836 systemd-logind[1581]: Watching system buttons on /dev/input/event3 (Sleep Button)
Nov 4 23:55:09.201868 systemd-logind[1581]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 23:55:09.207300 systemd-logind[1581]: New seat seat0.
Nov 4 23:55:09.220704 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 23:55:09.253125 bash[1640]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:55:09.259181 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:55:09.269906 systemd-networkd[1493]: eth0: Gained IPv6LL
Nov 4 23:55:09.276952 systemd[1]: Starting sshkeys.service...
Nov 4 23:55:09.287464 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:55:09.297689 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:55:09.320188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:09.338446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:55:09.353840 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Nov 4 23:55:09.394703 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 23:55:09.430207 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 23:55:09.449365 init.sh[1647]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Nov 4 23:55:09.455356 init.sh[1647]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Nov 4 23:55:09.464888 init.sh[1647]: + /usr/bin/google_instance_setup
Nov 4 23:55:09.466989 dbus-daemon[1556]: [system] SELinux support is enabled
Nov 4 23:55:09.479246 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 23:55:09.494836 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 23:55:09.495069 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 23:55:09.505639 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 23:55:09.505868 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 23:55:09.513517 dbus-daemon[1556]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1493 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 4 23:55:09.540010 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 4 23:55:09.552371 update_engine[1585]: I20251104 23:55:09.550029  1585 update_check_scheduler.cc:74] Next update check in 5m0s
Nov 4 23:55:09.556740 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:55:09.573450 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 4 23:55:09.589291 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:55:09.600089 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 23:55:09.738161 coreos-metadata[1650]: Nov 04 23:55:09.737 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Nov 4 23:55:09.739506 coreos-metadata[1650]: Nov 04 23:55:09.739 INFO Fetch failed with 404: resource not found
Nov 4 23:55:09.744133 coreos-metadata[1650]: Nov 04 23:55:09.742 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Nov 4 23:55:09.744724 coreos-metadata[1650]: Nov 04 23:55:09.744 INFO Fetch successful
Nov 4 23:55:09.747574 coreos-metadata[1650]: Nov 04 23:55:09.745 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Nov 4 23:55:09.753983 coreos-metadata[1650]: Nov 04 23:55:09.750 INFO Fetch failed with 404: resource not found
Nov 4 23:55:09.753983 coreos-metadata[1650]: Nov 04 23:55:09.753 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Nov 4 23:55:09.753983 coreos-metadata[1650]: Nov 04 23:55:09.753 INFO Fetch failed with 404: resource not found
Nov 4 23:55:09.753983 coreos-metadata[1650]: Nov 04 23:55:09.753 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Nov 4 23:55:09.760129 coreos-metadata[1650]: Nov 04 23:55:09.758 INFO Fetch successful
Nov 4 23:55:09.762359 unknown[1650]: wrote ssh authorized keys file for user: core
Nov 4 23:55:09.807544 systemd-coredump[1636]: Process 1567 (ntpd) of user 0 dumped core.
                                              Module /bin/ntpd without build-id.
                                              Module libnss_usrfiles.so.2 without build-id.
                                              Module libgcc_s.so.1 without build-id.
                                              Module ld-linux-x86-64.so.2 without build-id.
                                              Module libc.so.6 without build-id.
                                              Module libcrypto.so.3 without build-id.
                                              Module libm.so.6 without build-id.
                                              Module libcap.so.2 without build-id.
                                              Stack trace of thread 1567:
                                              #0  0x00005558c9a9aaeb n/a (/bin/ntpd + 0x68aeb)
                                              #1  0x00005558c9a43cdf n/a (/bin/ntpd + 0x11cdf)
                                              #2  0x00005558c9a44575 n/a (/bin/ntpd + 0x12575)
                                              #3  0x00005558c9a3fd8a n/a (/bin/ntpd + 0xdd8a)
                                              #4  0x00005558c9a415d3 n/a (/bin/ntpd + 0xf5d3)
                                              #5  0x00005558c9a49fd1 n/a (/bin/ntpd + 0x17fd1)
                                              #6  0x00005558c9a3ac2d n/a (/bin/ntpd + 0x8c2d)
                                              #7  0x00007fb53ea1916c n/a (libc.so.6 + 0x2716c)
                                              #8  0x00007fb53ea19229 __libc_start_main (libc.so.6 + 0x27229)
                                              #9  0x00005558c9a3ac55 n/a (/bin/ntpd + 0x8c55)
                                              ELF object binary architecture: AMD x86-64
Nov 4 23:55:09.809289 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Nov 4 23:55:09.809591 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Nov 4 23:55:09.827158 systemd[1]: systemd-coredump@0-1600-0.service: Deactivated successfully.
Nov 4 23:55:09.835877 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 4 23:55:09.843102 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 4 23:55:09.864954 dbus-daemon[1556]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1660 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 4 23:55:09.887594 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 4 23:55:09.906705 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:55:09.910110 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 23:55:09.929967 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:55:09.932892 systemd[1]: Finished sshkeys.service.
Nov 4 23:55:09.950892 systemd[1]: Started ntpd.service - Network Time Service.
Nov 4 23:55:10.021305 ntpd[1679]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 21:31:21 UTC 2025 (1): Starting
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 21:31:21 UTC 2025 (1): Starting
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: ----------------------------------------------------
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: ntp-4 is maintained by Network Time Foundation,
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: corporation. Support and training for ntp-4 are
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: available at https://www.nwtime.org/support
Nov 4 23:55:10.023201 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: ----------------------------------------------------
Nov 4 23:55:10.022443 ntpd[1679]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: proto: precision = 0.073 usec (-24)
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: basedate set to 2025-10-23
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: gps base set to 2025-10-26 (week 2390)
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen and drop on 0 v6wildcard [::]:123
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen normally on 2 lo 127.0.0.1:123
Nov 4 23:55:10.024645 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen normally on 3 eth0 10.128.0.112:123
Nov 4 23:55:10.022471 ntpd[1679]: ----------------------------------------------------
Nov 4 23:55:10.022486 ntpd[1679]: ntp-4 is maintained by Network Time Foundation,
Nov 4 23:55:10.022499 ntpd[1679]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 4 23:55:10.037939 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen normally on 4 lo [::1]:123
Nov 4 23:55:10.037939 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:70%2]:123
Nov 4 23:55:10.037939 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: Listening on routing socket on fd #22 for interface updates
Nov 4 23:55:10.037939 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 4 23:55:10.037939 ntpd[1679]: 4 Nov 23:55:10 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 4 23:55:10.022513 ntpd[1679]: corporation. Support and training for ntp-4 are
Nov 4 23:55:10.022526 ntpd[1679]: available at https://www.nwtime.org/support
Nov 4 23:55:10.022540 ntpd[1679]: ----------------------------------------------------
Nov 4 23:55:10.023532 ntpd[1679]: proto: precision = 0.073 usec (-24)
Nov 4 23:55:10.023844 ntpd[1679]: basedate set to 2025-10-23
Nov 4 23:55:10.023864 ntpd[1679]: gps base set to 2025-10-26 (week 2390)
Nov 4 23:55:10.023986 ntpd[1679]: Listen and drop on 0 v6wildcard [::]:123
Nov 4 23:55:10.024026 ntpd[1679]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 4 23:55:10.024262 ntpd[1679]: Listen normally on 2 lo 127.0.0.1:123
Nov 4 23:55:10.024301 ntpd[1679]: Listen normally on 3 eth0 10.128.0.112:123
Nov 4 23:55:10.026428 ntpd[1679]: Listen normally on 4 lo [::1]:123
Nov 4 23:55:10.026496 ntpd[1679]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:70%2]:123
Nov 4 23:55:10.026536 ntpd[1679]: Listening on routing socket on fd #22 for interface updates
Nov 4 23:55:10.028281 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 4 23:55:10.028315 ntpd[1679]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 4 23:55:10.202043 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 23:55:10.319131 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 23:55:10.327263 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 23:55:10.345542 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 23:55:10.359797 systemd[1]: Started sshd@0-10.128.0.112:22-139.178.68.195:56192.service - OpenSSH per-connection server daemon (139.178.68.195:56192).
Nov 4 23:55:10.382419 containerd[1607]: time="2025-11-04T23:55:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:55:10.386217 containerd[1607]: time="2025-11-04T23:55:10.383910971Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:55:10.445881 polkitd[1676]: Started polkitd version 126
Nov 4 23:55:10.448688 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 23:55:10.449817 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 23:55:10.467262 containerd[1607]: time="2025-11-04T23:55:10.466548951Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.61µs"
Nov 4 23:55:10.467262 containerd[1607]: time="2025-11-04T23:55:10.466599386Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:55:10.467262 containerd[1607]: time="2025-11-04T23:55:10.466638338Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:55:10.466715 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 23:55:10.469172 containerd[1607]: time="2025-11-04T23:55:10.469125333Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:55:10.469273 containerd[1607]: time="2025-11-04T23:55:10.469178889Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:55:10.469273 containerd[1607]: time="2025-11-04T23:55:10.469225461Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:55:10.469421 containerd[1607]: time="2025-11-04T23:55:10.469356405Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:55:10.469421 containerd[1607]: time="2025-11-04T23:55:10.469379413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:55:10.474287 containerd[1607]: time="2025-11-04T23:55:10.474229572Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:55:10.474287 containerd[1607]: time="2025-11-04T23:55:10.474281285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:55:10.474956 containerd[1607]: time="2025-11-04T23:55:10.474306435Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:55:10.474956 containerd[1607]: time="2025-11-04T23:55:10.474319629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:55:10.474956 containerd[1607]: time="2025-11-04T23:55:10.474638356Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:55:10.475150 containerd[1607]: time="2025-11-04T23:55:10.474975200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:55:10.475150 containerd[1607]: time="2025-11-04T23:55:10.475031189Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:55:10.475150 containerd[1607]: time="2025-11-04T23:55:10.475050273Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:55:10.475150 containerd[1607]: time="2025-11-04T23:55:10.475101266Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:55:10.480397 containerd[1607]: time="2025-11-04T23:55:10.478095052Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:55:10.480397 containerd[1607]: time="2025-11-04T23:55:10.478218597Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493575324Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493665974Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493691483Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493714675Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: 
time="2025-11-04T23:55:10.493736215Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493754149Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493774167Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493793319Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493818525Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493836191Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493852594Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.493874059Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.494047720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:55:10.494831 containerd[1607]: time="2025-11-04T23:55:10.494077696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494103325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: 
time="2025-11-04T23:55:10.494123362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494143605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494176726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494200108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494217652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494238731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494258242Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494277020Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.494799459Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.496009671Z" level=info msg="Start snapshots syncer" Nov 4 23:55:10.496750 containerd[1607]: time="2025-11-04T23:55:10.496089682Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:55:10.501464 containerd[1607]: time="2025-11-04T23:55:10.499838206Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:55:10.501464 containerd[1607]: time="2025-11-04T23:55:10.501399039Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:55:10.500150 polkitd[1676]: Loading rules from directory /etc/polkit-1/rules.d Nov 4 
23:55:10.500899 polkitd[1676]: Loading rules from directory /run/polkit-1/rules.d Nov 4 23:55:10.500972 polkitd[1676]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.503440793Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504107840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504152726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504173110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504203361Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504224582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504244412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:55:10.505375 containerd[1607]: time="2025-11-04T23:55:10.504264890Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508541553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508593664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version 
type=io.containerd.grpc.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508622682Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508665362Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508704004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508720027Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508737624Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508752494Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508768626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508785384Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508814067Z" level=info msg="runtime interface created" Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508824499Z" level=info msg="created NRI interface" Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508866655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508893590Z" level=info msg="Connect containerd service" Nov 4 23:55:10.510399 containerd[1607]: time="2025-11-04T23:55:10.508944441Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:55:10.511146 polkitd[1676]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 4 23:55:10.511218 polkitd[1676]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 4 23:55:10.511277 polkitd[1676]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 4 23:55:10.516412 containerd[1607]: time="2025-11-04T23:55:10.515941538Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:55:10.516152 polkitd[1676]: Finished loading, compiling and executing 2 rules Nov 4 23:55:10.517686 systemd[1]: Started polkit.service - Authorization Manager. Nov 4 23:55:10.520271 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 4 23:55:10.523675 polkitd[1676]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 4 23:55:10.584795 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:55:10.601131 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:55:10.615722 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:55:10.625903 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:55:10.645303 systemd-hostnamed[1660]: Hostname set to (transient) Nov 4 23:55:10.647276 systemd-resolved[1277]: System hostname changed to 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8'. 
Nov 4 23:55:10.899188 containerd[1607]: time="2025-11-04T23:55:10.899144052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901396981Z" level=info msg="Start subscribing containerd event" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901566270Z" level=info msg="Start recovering state" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901684078Z" level=info msg="Start event monitor" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901705103Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901718459Z" level=info msg="Start streaming server" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901734667Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901747533Z" level=info msg="runtime interface starting up..." Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901757075Z" level=info msg="starting plugins..." Nov 4 23:55:10.902232 containerd[1607]: time="2025-11-04T23:55:10.901775537Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:55:10.904032 containerd[1607]: time="2025-11-04T23:55:10.904001749Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:55:10.904777 containerd[1607]: time="2025-11-04T23:55:10.904751674Z" level=info msg="containerd successfully booted in 0.529667s" Nov 4 23:55:10.904964 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 4 23:55:11.016445 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 56192 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:11.022128 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:11.047906 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:55:11.059849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:55:11.100444 systemd-logind[1581]: New session 1 of user core. Nov 4 23:55:11.121201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:55:11.127844 tar[1603]: linux-amd64/README.md Nov 4 23:55:11.144212 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:55:11.172983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 23:55:11.190814 (systemd)[1733]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:55:11.196751 systemd-logind[1581]: New session c1 of user core. Nov 4 23:55:11.209646 instance-setup[1653]: INFO Running google_set_multiqueue. Nov 4 23:55:11.240724 instance-setup[1653]: INFO Set channels for eth0 to 2. Nov 4 23:55:11.247685 instance-setup[1653]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Nov 4 23:55:11.251781 instance-setup[1653]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 4 23:55:11.251865 instance-setup[1653]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 4 23:55:11.254814 instance-setup[1653]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 4 23:55:11.255609 instance-setup[1653]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 4 23:55:11.257892 instance-setup[1653]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 4 23:55:11.258540 instance-setup[1653]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Nov 4 23:55:11.260712 instance-setup[1653]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 4 23:55:11.270294 instance-setup[1653]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 4 23:55:11.277573 instance-setup[1653]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 4 23:55:11.278900 instance-setup[1653]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 4 23:55:11.278950 instance-setup[1653]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 4 23:55:11.309250 init.sh[1647]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 4 23:55:11.526457 systemd[1733]: Queued start job for default target default.target. Nov 4 23:55:11.532237 systemd[1733]: Created slice app.slice - User Application Slice. Nov 4 23:55:11.532296 systemd[1733]: Reached target paths.target - Paths. Nov 4 23:55:11.532558 systemd[1733]: Reached target timers.target - Timers. Nov 4 23:55:11.536503 systemd[1733]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:55:11.572727 systemd[1733]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:55:11.572933 systemd[1733]: Reached target sockets.target - Sockets. Nov 4 23:55:11.573311 systemd[1733]: Reached target basic.target - Basic System. Nov 4 23:55:11.573433 systemd[1733]: Reached target default.target - Main User Target. Nov 4 23:55:11.573487 systemd[1733]: Startup finished in 362ms. Nov 4 23:55:11.574135 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:55:11.583158 startup-script[1768]: INFO Starting startup scripts. Nov 4 23:55:11.589766 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:55:11.590679 startup-script[1768]: INFO No startup scripts found in metadata. Nov 4 23:55:11.590780 startup-script[1768]: INFO Finished running startup scripts. 
Nov 4 23:55:11.627502 init.sh[1647]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 4 23:55:11.627502 init.sh[1647]: + daemon_pids=() Nov 4 23:55:11.627502 init.sh[1647]: + for d in accounts clock_skew network Nov 4 23:55:11.628157 init.sh[1647]: + daemon_pids+=($!) Nov 4 23:55:11.628157 init.sh[1647]: + for d in accounts clock_skew network Nov 4 23:55:11.628274 init.sh[1774]: + /usr/bin/google_accounts_daemon Nov 4 23:55:11.629043 init.sh[1647]: + daemon_pids+=($!) Nov 4 23:55:11.629043 init.sh[1647]: + for d in accounts clock_skew network Nov 4 23:55:11.629043 init.sh[1647]: + daemon_pids+=($!) Nov 4 23:55:11.629043 init.sh[1647]: + NOTIFY_SOCKET=/run/systemd/notify Nov 4 23:55:11.629043 init.sh[1647]: + /usr/bin/systemd-notify --ready Nov 4 23:55:11.629284 init.sh[1775]: + /usr/bin/google_clock_skew_daemon Nov 4 23:55:11.629642 init.sh[1776]: + /usr/bin/google_network_daemon Nov 4 23:55:11.648691 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 4 23:55:11.661524 init.sh[1647]: + wait -n 1774 1775 1776 Nov 4 23:55:11.848982 systemd[1]: Started sshd@1-10.128.0.112:22-139.178.68.195:56200.service - OpenSSH per-connection server daemon (139.178.68.195:56200). Nov 4 23:55:12.140659 google-clock-skew[1775]: INFO Starting Google Clock Skew daemon. Nov 4 23:55:12.148016 google-networking[1776]: INFO Starting Google Networking daemon. Nov 4 23:55:12.152141 google-clock-skew[1775]: INFO Clock drift token has changed: 0. Nov 4 23:55:12.211809 groupadd[1792]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 4 23:55:12.217601 groupadd[1792]: group added to /etc/gshadow: name=google-sudoers Nov 4 23:55:12.225884 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 56200 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:12.228755 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:12.241692 systemd-logind[1581]: New session 2 of user core. 
Nov 4 23:55:12.247747 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:55:12.255390 init.sh[1795]: [sss_cache] [ldb] (0x0010): Unable to find backend for '/var/lib/sss/db/config.ldb' - do you need to set LDB_MODULES_PATH? Nov 4 23:55:12.255390 init.sh[1795]: [sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb] Nov 4 23:55:12.255390 init.sh[1795]: Could not open available domains Nov 4 23:55:12.256637 groupadd[1792]: groupadd: sss_cache exited with status 5 Nov 4 23:55:12.256652 groupadd[1792]: groupadd: Failed to flush the sssd cache. Nov 4 23:55:12.256689 groupadd[1792]: new group: name=google-sudoers, GID=1000 Nov 4 23:55:12.270483 init.sh[1798]: [sss_cache] [ldb] (0x0010): Unable to find backend for '/var/lib/sss/db/config.ldb' - do you need to set LDB_MODULES_PATH? Nov 4 23:55:12.270483 init.sh[1798]: [sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb] Nov 4 23:55:12.270483 init.sh[1798]: Could not open available domains Nov 4 23:55:12.271049 groupadd[1792]: groupadd: sss_cache exited with status 5 Nov 4 23:55:12.271059 groupadd[1792]: groupadd: Failed to flush the sssd cache. Nov 4 23:55:12.288981 google-accounts[1774]: INFO Starting Google Accounts daemon. Nov 4 23:55:12.301642 google-accounts[1774]: WARNING OS Login not installed. Nov 4 23:55:12.303921 google-accounts[1774]: INFO Creating a new user account for 0. Nov 4 23:55:12.311665 init.sh[1802]: useradd: invalid user name '0': use --badname to ignore Nov 4 23:55:12.312521 google-accounts[1774]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 4 23:55:12.342652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:12.353404 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 4 23:55:12.359888 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:55:12.362794 systemd[1]: Startup finished in 2.298s (kernel) + 20.936s (initrd) + 9.886s (userspace) = 33.122s. Nov 4 23:55:12.444156 sshd[1797]: Connection closed by 139.178.68.195 port 56200 Nov 4 23:55:12.445029 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:12.453649 systemd[1]: sshd@1-10.128.0.112:22-139.178.68.195:56200.service: Deactivated successfully. Nov 4 23:55:12.456765 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:55:12.459132 systemd-logind[1581]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:55:12.461015 systemd-logind[1581]: Removed session 2. Nov 4 23:55:12.504271 systemd[1]: Started sshd@2-10.128.0.112:22-139.178.68.195:56216.service - OpenSSH per-connection server daemon (139.178.68.195:56216). Nov 4 23:55:12.816538 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 56216 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:12.818515 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:12.828666 systemd-logind[1581]: New session 3 of user core. Nov 4 23:55:12.835601 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:55:13.027215 sshd[1825]: Connection closed by 139.178.68.195 port 56216 Nov 4 23:55:13.028125 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:13.034923 systemd[1]: sshd@2-10.128.0.112:22-139.178.68.195:56216.service: Deactivated successfully. Nov 4 23:55:13.038005 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:55:13.041618 systemd-logind[1581]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:55:13.043795 systemd-logind[1581]: Removed session 3. 
Nov 4 23:55:13.086057 systemd[1]: Started sshd@3-10.128.0.112:22-139.178.68.195:56218.service - OpenSSH per-connection server daemon (139.178.68.195:56218). Nov 4 23:55:13.000085 systemd-resolved[1277]: Clock change detected. Flushing caches. Nov 4 23:55:13.015716 systemd-journald[1225]: Time jumped backwards, rotating. Nov 4 23:55:13.003125 google-clock-skew[1775]: INFO Synced system time with hardware clock. Nov 4 23:55:13.134446 kubelet[1808]: E1104 23:55:13.134386 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:55:13.137728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:55:13.137978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:55:13.138586 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 266.5M memory peak. Nov 4 23:55:13.268452 sshd[1831]: Accepted publickey for core from 139.178.68.195 port 56218 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:13.270066 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:13.278361 systemd-logind[1581]: New session 4 of user core. Nov 4 23:55:13.288568 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:55:13.492477 sshd[1837]: Connection closed by 139.178.68.195 port 56218 Nov 4 23:55:13.493344 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:13.499494 systemd[1]: sshd@3-10.128.0.112:22-139.178.68.195:56218.service: Deactivated successfully. Nov 4 23:55:13.502125 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:55:13.503379 systemd-logind[1581]: Session 4 logged out. Waiting for processes to exit. 
Nov 4 23:55:13.505428 systemd-logind[1581]: Removed session 4. Nov 4 23:55:13.544595 systemd[1]: Started sshd@4-10.128.0.112:22-139.178.68.195:47544.service - OpenSSH per-connection server daemon (139.178.68.195:47544). Nov 4 23:55:13.856082 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 47544 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:13.857781 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:13.865322 systemd-logind[1581]: New session 5 of user core. Nov 4 23:55:13.871502 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:55:14.052699 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:55:14.053202 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:14.069647 sudo[1847]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:14.113053 sshd[1846]: Connection closed by 139.178.68.195 port 47544 Nov 4 23:55:14.114585 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:14.120023 systemd[1]: sshd@4-10.128.0.112:22-139.178.68.195:47544.service: Deactivated successfully. Nov 4 23:55:14.122693 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:55:14.124666 systemd-logind[1581]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:55:14.126857 systemd-logind[1581]: Removed session 5. Nov 4 23:55:14.166956 systemd[1]: Started sshd@5-10.128.0.112:22-139.178.68.195:47556.service - OpenSSH per-connection server daemon (139.178.68.195:47556). Nov 4 23:55:14.466669 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 47556 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:55:14.468135 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:14.475486 systemd-logind[1581]: New session 6 of user core. 
Nov 4 23:55:14.484541 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:55:14.647512 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:55:14.648018 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:14.700573 sudo[1858]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:14.716486 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:55:14.716977 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:55:14.730959 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:55:14.780681 augenrules[1880]: No rules Nov 4 23:55:14.782394 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:55:14.782725 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:55:14.784232 sudo[1857]: pam_unix(sudo:session): session closed for user root Nov 4 23:55:14.827674 sshd[1856]: Connection closed by 139.178.68.195 port 47556 Nov 4 23:55:14.828586 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:14.834058 systemd[1]: sshd@5-10.128.0.112:22-139.178.68.195:47556.service: Deactivated successfully. Nov 4 23:55:14.836807 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:55:14.839074 systemd-logind[1581]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:55:14.840969 systemd-logind[1581]: Removed session 6. Nov 4 23:55:14.892945 systemd[1]: Started sshd@6-10.128.0.112:22-139.178.68.195:47572.service - OpenSSH per-connection server daemon (139.178.68.195:47572). 
Nov 4 23:55:15.204734 sshd[1889]: Accepted publickey for core from 139.178.68.195 port 47572 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:55:15.206432 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:15.213858 systemd-logind[1581]: New session 7 of user core.
Nov 4 23:55:15.221552 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 23:55:15.384615 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 23:55:15.385126 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:55:15.924682 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 23:55:15.943859 (dockerd)[1911]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 23:55:16.311212 dockerd[1911]: time="2025-11-04T23:55:16.311105731Z" level=info msg="Starting up"
Nov 4 23:55:16.313026 dockerd[1911]: time="2025-11-04T23:55:16.312704114Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 23:55:16.332073 dockerd[1911]: time="2025-11-04T23:55:16.331998147Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 23:55:16.555855 systemd[1]: var-lib-docker-metacopy\x2dcheck220210584-merged.mount: Deactivated successfully.
Nov 4 23:55:16.579414 dockerd[1911]: time="2025-11-04T23:55:16.578761309Z" level=info msg="Loading containers: start."
Nov 4 23:55:16.598311 kernel: Initializing XFRM netlink socket
Nov 4 23:55:16.946592 systemd-networkd[1493]: docker0: Link UP
Nov 4 23:55:16.952994 dockerd[1911]: time="2025-11-04T23:55:16.952930571Z" level=info msg="Loading containers: done."
Nov 4 23:55:16.972847 dockerd[1911]: time="2025-11-04T23:55:16.972731263Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 23:55:16.973040 dockerd[1911]: time="2025-11-04T23:55:16.972855454Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 23:55:16.973040 dockerd[1911]: time="2025-11-04T23:55:16.972999159Z" level=info msg="Initializing buildkit"
Nov 4 23:55:17.007394 dockerd[1911]: time="2025-11-04T23:55:17.007319452Z" level=info msg="Completed buildkit initialization"
Nov 4 23:55:17.016609 dockerd[1911]: time="2025-11-04T23:55:17.016524226Z" level=info msg="Daemon has completed initialization"
Nov 4 23:55:17.016783 dockerd[1911]: time="2025-11-04T23:55:17.016601538Z" level=info msg="API listen on /run/docker.sock"
Nov 4 23:55:17.017261 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 23:55:17.921296 containerd[1607]: time="2025-11-04T23:55:17.921234329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 4 23:55:18.487527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605346398.mount: Deactivated successfully.
Nov 4 23:55:20.139411 containerd[1607]: time="2025-11-04T23:55:20.139336872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:20.141110 containerd[1607]: time="2025-11-04T23:55:20.141004440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28845499"
Nov 4 23:55:20.142596 containerd[1607]: time="2025-11-04T23:55:20.142548560Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:20.148300 containerd[1607]: time="2025-11-04T23:55:20.147005727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:20.148504 containerd[1607]: time="2025-11-04T23:55:20.148264561Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.226954493s"
Nov 4 23:55:20.148632 containerd[1607]: time="2025-11-04T23:55:20.148599904Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 4 23:55:20.149366 containerd[1607]: time="2025-11-04T23:55:20.149330367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 4 23:55:21.653262 containerd[1607]: time="2025-11-04T23:55:21.653191435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:21.654813 containerd[1607]: time="2025-11-04T23:55:21.654583109Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24788961"
Nov 4 23:55:21.656428 containerd[1607]: time="2025-11-04T23:55:21.656384804Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:21.659793 containerd[1607]: time="2025-11-04T23:55:21.659740554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:21.661108 containerd[1607]: time="2025-11-04T23:55:21.661060005Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.51168171s"
Nov 4 23:55:21.661227 containerd[1607]: time="2025-11-04T23:55:21.661112692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 4 23:55:21.662169 containerd[1607]: time="2025-11-04T23:55:21.662125608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 4 23:55:22.900845 containerd[1607]: time="2025-11-04T23:55:22.900774046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:22.902496 containerd[1607]: time="2025-11-04T23:55:22.902113083Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19178205"
Nov 4 23:55:22.903690 containerd[1607]: time="2025-11-04T23:55:22.903648676Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:22.907253 containerd[1607]: time="2025-11-04T23:55:22.907204240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:22.909286 containerd[1607]: time="2025-11-04T23:55:22.909220038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.247052141s"
Nov 4 23:55:22.909479 containerd[1607]: time="2025-11-04T23:55:22.909438869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 4 23:55:22.912109 containerd[1607]: time="2025-11-04T23:55:22.912063435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 4 23:55:23.250144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:55:23.253328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:23.751153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:23.766839 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:55:23.869639 kubelet[2198]: E1104 23:55:23.869516 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:55:23.876627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:55:23.876869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:55:23.877918 systemd[1]: kubelet.service: Consumed 260ms CPU time, 110.2M memory peak.
Nov 4 23:55:24.353678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340302635.mount: Deactivated successfully.
Nov 4 23:55:25.050812 containerd[1607]: time="2025-11-04T23:55:25.050732268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:25.052377 containerd[1607]: time="2025-11-04T23:55:25.052078250Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30926101"
Nov 4 23:55:25.053900 containerd[1607]: time="2025-11-04T23:55:25.053841523Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:25.056890 containerd[1607]: time="2025-11-04T23:55:25.056843555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:25.057739 containerd[1607]: time="2025-11-04T23:55:25.057695937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.145450469s"
Nov 4 23:55:25.057902 containerd[1607]: time="2025-11-04T23:55:25.057873501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 4 23:55:25.058490 containerd[1607]: time="2025-11-04T23:55:25.058462243Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 4 23:55:25.495116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513989711.mount: Deactivated successfully.
Nov 4 23:55:26.730772 containerd[1607]: time="2025-11-04T23:55:26.729753968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:26.730772 containerd[1607]: time="2025-11-04T23:55:26.729827341Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Nov 4 23:55:26.733229 containerd[1607]: time="2025-11-04T23:55:26.733171604Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:26.734455 containerd[1607]: time="2025-11-04T23:55:26.734390446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:26.736040 containerd[1607]: time="2025-11-04T23:55:26.735882116Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.677270367s"
Nov 4 23:55:26.736040 containerd[1607]: time="2025-11-04T23:55:26.735926426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 4 23:55:26.737023 containerd[1607]: time="2025-11-04T23:55:26.736976224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 4 23:55:27.174284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996262090.mount: Deactivated successfully.
Nov 4 23:55:27.180883 containerd[1607]: time="2025-11-04T23:55:27.180817135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:27.182081 containerd[1607]: time="2025-11-04T23:55:27.181799067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Nov 4 23:55:27.183394 containerd[1607]: time="2025-11-04T23:55:27.183349606Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:27.186174 containerd[1607]: time="2025-11-04T23:55:27.186129177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:27.187439 containerd[1607]: time="2025-11-04T23:55:27.187218228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.206919ms"
Nov 4 23:55:27.187439 containerd[1607]: time="2025-11-04T23:55:27.187261928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 4 23:55:27.188319 containerd[1607]: time="2025-11-04T23:55:27.188288404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 4 23:55:27.764534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351621954.mount: Deactivated successfully.
Nov 4 23:55:30.218694 containerd[1607]: time="2025-11-04T23:55:30.218616484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:30.220292 containerd[1607]: time="2025-11-04T23:55:30.220154841Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565"
Nov 4 23:55:30.222056 containerd[1607]: time="2025-11-04T23:55:30.221409407Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:30.225113 containerd[1607]: time="2025-11-04T23:55:30.225071545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:30.226552 containerd[1607]: time="2025-11-04T23:55:30.226505819Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.038070126s"
Nov 4 23:55:30.226688 containerd[1607]: time="2025-11-04T23:55:30.226558611Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 4 23:55:33.158000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:33.158346 systemd[1]: kubelet.service: Consumed 260ms CPU time, 110.2M memory peak.
Nov 4 23:55:33.161817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:33.209921 systemd[1]: Reload requested from client PID 2347 ('systemctl') (unit session-7.scope)...
Nov 4 23:55:33.209947 systemd[1]: Reloading...
Nov 4 23:55:33.410393 zram_generator::config[2393]: No configuration found.
Nov 4 23:55:33.730357 systemd[1]: Reloading finished in 519 ms.
Nov 4 23:55:33.814969 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 23:55:33.815103 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 23:55:33.815530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:33.815619 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.6M memory peak.
Nov 4 23:55:33.817954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:34.248594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:34.263860 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:55:34.329921 kubelet[2444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:55:34.329921 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 23:55:34.329921 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:55:34.330576 kubelet[2444]: I1104 23:55:34.330032 2444 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 23:55:35.229389 kubelet[2444]: I1104 23:55:35.229328 2444 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 4 23:55:35.229389 kubelet[2444]: I1104 23:55:35.229366 2444 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 23:55:35.229848 kubelet[2444]: I1104 23:55:35.229799 2444 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 4 23:55:35.273305 kubelet[2444]: I1104 23:55:35.273253 2444 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 23:55:35.277300 kubelet[2444]: E1104 23:55:35.276133 2444 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:55:35.290310 kubelet[2444]: I1104 23:55:35.290259 2444 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 23:55:35.294571 kubelet[2444]: I1104 23:55:35.294536 2444 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 23:55:35.294966 kubelet[2444]: I1104 23:55:35.294901 2444 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:55:35.295219 kubelet[2444]: I1104 23:55:35.294949 2444 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:55:35.295219 kubelet[2444]: I1104 23:55:35.295215 2444 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:55:35.295484 kubelet[2444]: I1104 23:55:35.295234 2444 container_manager_linux.go:304] "Creating device plugin manager"
Nov 4 23:55:35.295484 kubelet[2444]: I1104 23:55:35.295470 2444 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:35.303842 kubelet[2444]: I1104 23:55:35.303786 2444 kubelet.go:446] "Attempting to sync node with API server"
Nov 4 23:55:35.303842 kubelet[2444]: I1104 23:55:35.303852 2444 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:55:35.304032 kubelet[2444]: I1104 23:55:35.303889 2444 kubelet.go:352] "Adding apiserver pod source"
Nov 4 23:55:35.304032 kubelet[2444]: I1104 23:55:35.303907 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:55:35.308231 kubelet[2444]: W1104 23:55:35.308151 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused
Nov 4 23:55:35.309628 kubelet[2444]: E1104 23:55:35.308423 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:55:35.309628 kubelet[2444]: W1104 23:55:35.308585 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8&limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused
Nov 4 23:55:35.309628 kubelet[2444]: E1104 23:55:35.308648 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8&limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:55:35.309628 kubelet[2444]: I1104 23:55:35.308774 2444 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:55:35.309628 kubelet[2444]: I1104 23:55:35.309441 2444 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 4 23:55:35.310860 kubelet[2444]: W1104 23:55:35.310829 2444 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 4 23:55:35.314402 kubelet[2444]: I1104 23:55:35.314377 2444 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:55:35.314568 kubelet[2444]: I1104 23:55:35.314555 2444 server.go:1287] "Started kubelet"
Nov 4 23:55:35.318779 kubelet[2444]: I1104 23:55:35.318373 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:55:35.328586 kubelet[2444]: I1104 23:55:35.328553 2444 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:55:35.328888 kubelet[2444]: E1104 23:55:35.328856 2444 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found"
Nov 4 23:55:35.330387 kubelet[2444]: I1104 23:55:35.330341 2444 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:55:35.332482 kubelet[2444]: I1104 23:55:35.332445 2444 server.go:479] "Adding debug handlers to kubelet server"
Nov 4 23:55:35.333251 kubelet[2444]: I1104 23:55:35.333223 2444 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:55:35.333385 kubelet[2444]: I1104 23:55:35.333333 2444 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:55:35.335752 kubelet[2444]: I1104 23:55:35.335670 2444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:55:35.336048 kubelet[2444]: I1104 23:55:35.336020 2444 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:55:35.343836 kubelet[2444]: E1104 23:55:35.341563 2444 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8.1874f2fff2e72543 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,UID:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,},FirstTimestamp:2025-11-04 23:55:35.314523459 +0000 UTC m=+1.042082258,LastTimestamp:2025-11-04 23:55:35.314523459 +0000 UTC m=+1.042082258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,}"
Nov 4 23:55:35.344286 kubelet[2444]: I1104 23:55:35.344235 2444 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:55:35.344910 kubelet[2444]: E1104 23:55:35.344877 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8?timeout=10s\": dial tcp 10.128.0.112:6443: connect: connection refused" interval="200ms"
Nov 4 23:55:35.346408 kubelet[2444]: W1104 23:55:35.346365 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused
Nov 4 23:55:35.346593 kubelet[2444]: E1104 23:55:35.346564 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:55:35.347239 kubelet[2444]: I1104 23:55:35.347213 2444 factory.go:221] Registration of the systemd container factory successfully
Nov 4 23:55:35.347492 kubelet[2444]: I1104 23:55:35.347466 2444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:55:35.349942 kubelet[2444]: I1104 23:55:35.349921 2444 factory.go:221] Registration of the containerd container factory successfully
Nov 4 23:55:35.357198 kubelet[2444]: E1104 23:55:35.354537 2444 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:55:35.360777 kubelet[2444]: I1104 23:55:35.360694 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:55:35.365713 kubelet[2444]: I1104 23:55:35.363843 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:55:35.365836 kubelet[2444]: I1104 23:55:35.365725 2444 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 4 23:55:35.365836 kubelet[2444]: I1104 23:55:35.365753 2444 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:55:35.365836 kubelet[2444]: I1104 23:55:35.365765 2444 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 4 23:55:35.365958 kubelet[2444]: E1104 23:55:35.365840 2444 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:55:35.370508 kubelet[2444]: W1104 23:55:35.370457 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused
Nov 4 23:55:35.370663 kubelet[2444]: E1104 23:55:35.370530 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:55:35.390034 kubelet[2444]: I1104 23:55:35.390003 2444 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:55:35.390348 kubelet[2444]: I1104 23:55:35.390329 2444 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:55:35.390495 kubelet[2444]: I1104 23:55:35.390480 2444 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:35.393109 kubelet[2444]: I1104 23:55:35.393083 2444 policy_none.go:49] "None policy: Start"
Nov 4 23:55:35.393254 kubelet[2444]: I1104 23:55:35.393241 2444 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:55:35.393387 kubelet[2444]: I1104 23:55:35.393373 2444 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:55:35.401492 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 23:55:35.418706 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 23:55:35.424339 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 23:55:35.429857 kubelet[2444]: E1104 23:55:35.429817 2444 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found"
Nov 4 23:55:35.437758 kubelet[2444]: I1104 23:55:35.436432 2444 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 4 23:55:35.437758 kubelet[2444]: I1104 23:55:35.436721 2444 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:55:35.437758 kubelet[2444]: I1104 23:55:35.436739 2444 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:55:35.437758 kubelet[2444]: I1104 23:55:35.437015 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:55:35.439346 kubelet[2444]: E1104 23:55:35.439319 2444 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:55:35.439566 kubelet[2444]: E1104 23:55:35.439532 2444 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found"
Nov 4 23:55:35.487333 systemd[1]: Created slice kubepods-burstable-pod8f0da884f6fe84d5cd11abf61ec5cc89.slice - libcontainer container kubepods-burstable-pod8f0da884f6fe84d5cd11abf61ec5cc89.slice.
Nov 4 23:55:35.497406 kubelet[2444]: E1104 23:55:35.497352 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.501337 systemd[1]: Created slice kubepods-burstable-podc3b5ada23b63dae1a634c984f31b2613.slice - libcontainer container kubepods-burstable-podc3b5ada23b63dae1a634c984f31b2613.slice. Nov 4 23:55:35.513185 kubelet[2444]: E1104 23:55:35.513140 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.517849 systemd[1]: Created slice kubepods-burstable-pod68471c8acaba133a1dd3ed743d8e35bf.slice - libcontainer container kubepods-burstable-pod68471c8acaba133a1dd3ed743d8e35bf.slice. Nov 4 23:55:35.520744 kubelet[2444]: E1104 23:55:35.520696 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.541262 kubelet[2444]: I1104 23:55:35.541218 2444 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.541791 kubelet[2444]: E1104 23:55:35.541735 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.112:6443/api/v1/nodes\": dial tcp 10.128.0.112:6443: connect: connection refused" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.546464 kubelet[2444]: E1104 23:55:35.546388 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8?timeout=10s\": dial tcp 10.128.0.112:6443: connect: connection refused" interval="400ms" Nov 4 23:55:35.635079 kubelet[2444]: I1104 23:55:35.634996 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-ca-certs\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635079 kubelet[2444]: I1104 23:55:35.635064 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-k8s-certs\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635079 kubelet[2444]: I1104 23:55:35.635094 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635491 kubelet[2444]: I1104 23:55:35.635123 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-ca-certs\") pod 
\"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635491 kubelet[2444]: I1104 23:55:35.635154 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635491 kubelet[2444]: I1104 23:55:35.635181 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635491 kubelet[2444]: I1104 23:55:35.635209 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635642 kubelet[2444]: I1104 23:55:35.635249 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-k8s-certs\") pod 
\"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.635642 kubelet[2444]: I1104 23:55:35.635308 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68471c8acaba133a1dd3ed743d8e35bf-kubeconfig\") pod \"kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"68471c8acaba133a1dd3ed743d8e35bf\") " pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.746889 kubelet[2444]: I1104 23:55:35.746753 2444 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.747296 kubelet[2444]: E1104 23:55:35.747241 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.112:6443/api/v1/nodes\": dial tcp 10.128.0.112:6443: connect: connection refused" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:35.798791 containerd[1607]: time="2025-11-04T23:55:35.798718584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:8f0da884f6fe84d5cd11abf61ec5cc89,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:35.815299 containerd[1607]: time="2025-11-04T23:55:35.815229238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:c3b5ada23b63dae1a634c984f31b2613,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:35.825320 containerd[1607]: time="2025-11-04T23:55:35.824756458Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:68471c8acaba133a1dd3ed743d8e35bf,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:35.835679 containerd[1607]: time="2025-11-04T23:55:35.835522472Z" level=info msg="connecting to shim 1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38" address="unix:///run/containerd/s/982a62d5304f17ceabf0c5edae2350dac39792bb54e9608e7577e7b7d8057f43" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:35.897551 systemd[1]: Started cri-containerd-1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38.scope - libcontainer container 1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38. Nov 4 23:55:35.900374 containerd[1607]: time="2025-11-04T23:55:35.899693516Z" level=info msg="connecting to shim e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3" address="unix:///run/containerd/s/1f6d56184ece62e474718d05563f461e7de7d8a55f576131eab1286b36a4935d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:35.930827 containerd[1607]: time="2025-11-04T23:55:35.930767379Z" level=info msg="connecting to shim 6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876" address="unix:///run/containerd/s/a7c066eea101a6ce687e7be6b3fee69122fc8be280bd8f5e81d49e5c6fca4ed2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:35.947249 kubelet[2444]: E1104 23:55:35.947192 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8?timeout=10s\": dial tcp 10.128.0.112:6443: connect: connection refused" interval="800ms" Nov 4 23:55:35.972553 systemd[1]: Started cri-containerd-e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3.scope - libcontainer container e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3. 
Nov 4 23:55:36.004684 systemd[1]: Started cri-containerd-6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876.scope - libcontainer container 6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876. Nov 4 23:55:36.034510 containerd[1607]: time="2025-11-04T23:55:36.034455974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:8f0da884f6fe84d5cd11abf61ec5cc89,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38\"" Nov 4 23:55:36.041318 kubelet[2444]: E1104 23:55:36.040704 2444 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d" Nov 4 23:55:36.045919 containerd[1607]: time="2025-11-04T23:55:36.045857801Z" level=info msg="CreateContainer within sandbox \"1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:55:36.062469 containerd[1607]: time="2025-11-04T23:55:36.062357601Z" level=info msg="Container a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:36.078294 containerd[1607]: time="2025-11-04T23:55:36.077900117Z" level=info msg="CreateContainer within sandbox \"1c71dc8e377150bcda12793121c77038ea7f7a16abb8bd8da607408aa4154d38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc\"" Nov 4 23:55:36.082288 containerd[1607]: time="2025-11-04T23:55:36.082216577Z" level=info msg="StartContainer for \"a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc\"" Nov 4 23:55:36.086516 containerd[1607]: time="2025-11-04T23:55:36.086416609Z" level=info 
msg="connecting to shim a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc" address="unix:///run/containerd/s/982a62d5304f17ceabf0c5edae2350dac39792bb54e9608e7577e7b7d8057f43" protocol=ttrpc version=3 Nov 4 23:55:36.129722 systemd[1]: Started cri-containerd-a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc.scope - libcontainer container a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc. Nov 4 23:55:36.151869 containerd[1607]: time="2025-11-04T23:55:36.151792756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:68471c8acaba133a1dd3ed743d8e35bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3\"" Nov 4 23:55:36.152331 kubelet[2444]: I1104 23:55:36.152234 2444 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:36.155408 kubelet[2444]: E1104 23:55:36.155360 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.112:6443/api/v1/nodes\": dial tcp 10.128.0.112:6443: connect: connection refused" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:36.156813 kubelet[2444]: E1104 23:55:36.156756 2444 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d" Nov 4 23:55:36.157850 containerd[1607]: time="2025-11-04T23:55:36.157808025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8,Uid:c3b5ada23b63dae1a634c984f31b2613,Namespace:kube-system,Attempt:0,} returns sandbox id \"6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876\"" Nov 4 
23:55:36.159438 containerd[1607]: time="2025-11-04T23:55:36.159253371Z" level=info msg="CreateContainer within sandbox \"e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:55:36.160605 kubelet[2444]: W1104 23:55:36.160366 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused Nov 4 23:55:36.161192 kubelet[2444]: E1104 23:55:36.160814 2444 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36" Nov 4 23:55:36.163493 kubelet[2444]: E1104 23:55:36.161029 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError" Nov 4 23:55:36.164312 containerd[1607]: time="2025-11-04T23:55:36.164177923Z" level=info msg="CreateContainer within sandbox \"6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:55:36.177864 containerd[1607]: time="2025-11-04T23:55:36.176011620Z" level=info msg="Container 342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:36.178672 containerd[1607]: time="2025-11-04T23:55:36.178629892Z" level=info msg="Container d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad: CDI devices from CRI Config.CDIDevices: 
[]" Nov 4 23:55:36.191204 containerd[1607]: time="2025-11-04T23:55:36.190741735Z" level=info msg="CreateContainer within sandbox \"6812c048ec94302c6b69dc7a8756236d288b770ea6f85a58cb4819b3ce012876\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad\"" Nov 4 23:55:36.193285 containerd[1607]: time="2025-11-04T23:55:36.193214623Z" level=info msg="CreateContainer within sandbox \"e4fc9763b543b2f0f126f1713599004fbbaaec52091a5989cac37d9e9074a5c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b\"" Nov 4 23:55:36.193513 containerd[1607]: time="2025-11-04T23:55:36.193481423Z" level=info msg="StartContainer for \"d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad\"" Nov 4 23:55:36.195143 containerd[1607]: time="2025-11-04T23:55:36.195096218Z" level=info msg="connecting to shim d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad" address="unix:///run/containerd/s/a7c066eea101a6ce687e7be6b3fee69122fc8be280bd8f5e81d49e5c6fca4ed2" protocol=ttrpc version=3 Nov 4 23:55:36.196400 containerd[1607]: time="2025-11-04T23:55:36.196328063Z" level=info msg="StartContainer for \"342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b\"" Nov 4 23:55:36.202295 containerd[1607]: time="2025-11-04T23:55:36.200936154Z" level=info msg="connecting to shim 342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b" address="unix:///run/containerd/s/1f6d56184ece62e474718d05563f461e7de7d8a55f576131eab1286b36a4935d" protocol=ttrpc version=3 Nov 4 23:55:36.239581 systemd[1]: Started cri-containerd-342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b.scope - libcontainer container 342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b. 
Nov 4 23:55:36.251527 systemd[1]: Started cri-containerd-d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad.scope - libcontainer container d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad. Nov 4 23:55:36.281961 containerd[1607]: time="2025-11-04T23:55:36.280446025Z" level=info msg="StartContainer for \"a15c82898374cce950b23efef2d2d8d8e3ee6f005a3ab4318bf99485e4dbacdc\" returns successfully" Nov 4 23:55:36.391296 containerd[1607]: time="2025-11-04T23:55:36.391239225Z" level=info msg="StartContainer for \"342fc5cea22320cf133e70d941ceade1cd52e3582fb28e333ac11aacd1b1f31b\" returns successfully" Nov 4 23:55:36.410138 kubelet[2444]: E1104 23:55:36.409883 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:36.426677 containerd[1607]: time="2025-11-04T23:55:36.426613074Z" level=info msg="StartContainer for \"d4f749dc453917fe7410a867cbce722d802b584c5167f6aea69a5fae0ab60bad\" returns successfully" Nov 4 23:55:36.442469 kubelet[2444]: W1104 23:55:36.442317 2444 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8&limit=500&resourceVersion=0": dial tcp 10.128.0.112:6443: connect: connection refused Nov 4 23:55:36.443388 kubelet[2444]: E1104 23:55:36.442444 2444 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8&limit=500&resourceVersion=0\": dial tcp 10.128.0.112:6443: connect: connection refused" logger="UnhandledError" Nov 4 23:55:36.960450 kubelet[2444]: I1104 23:55:36.960413 
2444 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:37.419683 kubelet[2444]: E1104 23:55:37.419636 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:37.421620 kubelet[2444]: E1104 23:55:37.421582 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:37.422504 kubelet[2444]: E1104 23:55:37.422476 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:38.421299 kubelet[2444]: E1104 23:55:38.421240 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:38.422984 kubelet[2444]: E1104 23:55:38.421821 2444 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:39.968153 kubelet[2444]: E1104 23:55:39.968080 2444 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.030421 kubelet[2444]: I1104 23:55:40.030367 2444 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.030609 kubelet[2444]: E1104 23:55:40.030434 2444 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\": node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found" Nov 4 23:55:40.129414 kubelet[2444]: I1104 23:55:40.129354 2444 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.141382 kubelet[2444]: E1104 23:55:40.141302 2444 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.141382 kubelet[2444]: I1104 23:55:40.141357 2444 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.146241 kubelet[2444]: E1104 23:55:40.146193 2444 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.146241 kubelet[2444]: I1104 23:55:40.146242 2444 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.153213 kubelet[2444]: E1104 23:55:40.153161 2444 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:55:40.310358 kubelet[2444]: I1104 23:55:40.310309 2444 apiserver.go:52] "Watching apiserver" Nov 4 23:55:40.333494 kubelet[2444]: I1104 23:55:40.333424 2444 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:55:40.533911 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 4 23:55:42.245333 systemd[1]: Reload requested from client PID 2719 ('systemctl') (unit session-7.scope)... Nov 4 23:55:42.245365 systemd[1]: Reloading... Nov 4 23:55:42.412326 zram_generator::config[2764]: No configuration found. Nov 4 23:55:42.757538 systemd[1]: Reloading finished in 511 ms. Nov 4 23:55:42.802592 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:42.824321 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:55:42.824749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:42.824859 systemd[1]: kubelet.service: Consumed 1.579s CPU time, 132M memory peak. Nov 4 23:55:42.827783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:55:43.191050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:55:43.204906 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:55:43.289329 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:55:43.289329 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 4 23:55:43.289329 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:55:43.289329 kubelet[2812]: I1104 23:55:43.288931 2812 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:55:43.307662 kubelet[2812]: I1104 23:55:43.307603 2812 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 4 23:55:43.307662 kubelet[2812]: I1104 23:55:43.307638 2812 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:55:43.309516 kubelet[2812]: I1104 23:55:43.308083 2812 server.go:954] "Client rotation is on, will bootstrap in background" Nov 4 23:55:43.310146 kubelet[2812]: I1104 23:55:43.310108 2812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 4 23:55:43.313179 kubelet[2812]: I1104 23:55:43.313119 2812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:55:43.320442 kubelet[2812]: I1104 23:55:43.320341 2812 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:55:43.326452 kubelet[2812]: I1104 23:55:43.325328 2812 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Nov 4 23:55:43.326452 kubelet[2812]: I1104 23:55:43.325621 2812 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:55:43.326452 kubelet[2812]: I1104 23:55:43.325665 2812 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:55:43.326452 kubelet[2812]: I1104 23:55:43.325955 2812 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.325973 2812 container_manager_linux.go:304] "Creating device plugin manager"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.326035 2812 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.326252 2812 kubelet.go:446] "Attempting to sync node with API server"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.326298 2812 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.326329 2812 kubelet.go:352] "Adding apiserver pod source"
Nov 4 23:55:43.326809 kubelet[2812]: I1104 23:55:43.326344 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:55:43.338013 kubelet[2812]: I1104 23:55:43.335746 2812 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:55:43.338013 kubelet[2812]: I1104 23:55:43.336625 2812 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 4 23:55:43.340570 kubelet[2812]: I1104 23:55:43.339307 2812 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:55:43.340570 kubelet[2812]: I1104 23:55:43.339368 2812 server.go:1287] "Started kubelet"
Nov 4 23:55:43.348402 kubelet[2812]: I1104 23:55:43.348360 2812 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:55:43.350756 kubelet[2812]: I1104 23:55:43.349540 2812 server.go:479] "Adding debug handlers to kubelet server"
Nov 4 23:55:43.353290 kubelet[2812]: I1104 23:55:43.352148 2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:55:43.354057 kubelet[2812]: I1104 23:55:43.354007 2812 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:55:43.358252 kubelet[2812]: I1104 23:55:43.357647 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:55:43.372035 kubelet[2812]: I1104 23:55:43.372000 2812 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:55:43.382622 kubelet[2812]: I1104 23:55:43.382583 2812 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:55:43.382974 kubelet[2812]: E1104 23:55:43.382946 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" not found"
Nov 4 23:55:43.384305 kubelet[2812]: I1104 23:55:43.383642 2812 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:55:43.384305 kubelet[2812]: I1104 23:55:43.383797 2812 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:55:43.405194 kubelet[2812]: I1104 23:55:43.404246 2812 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:55:43.412423 kubelet[2812]: I1104 23:55:43.411031 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:55:43.413299 kubelet[2812]: I1104 23:55:43.413155 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:55:43.413299 kubelet[2812]: I1104 23:55:43.413202 2812 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 4 23:55:43.413299 kubelet[2812]: I1104 23:55:43.413228 2812 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:55:43.413299 kubelet[2812]: I1104 23:55:43.413239 2812 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 4 23:55:43.413945 kubelet[2812]: E1104 23:55:43.413358 2812 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:55:43.418293 kubelet[2812]: I1104 23:55:43.417309 2812 factory.go:221] Registration of the containerd container factory successfully
Nov 4 23:55:43.418293 kubelet[2812]: I1104 23:55:43.417333 2812 factory.go:221] Registration of the systemd container factory successfully
Nov 4 23:55:43.436015 kubelet[2812]: E1104 23:55:43.435451 2812 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:55:43.513053 kubelet[2812]: I1104 23:55:43.512995 2812 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:55:43.513053 kubelet[2812]: I1104 23:55:43.513021 2812 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:55:43.513053 kubelet[2812]: I1104 23:55:43.513047 2812 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:43.514909 kubelet[2812]: E1104 23:55:43.513426 2812 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513589 2812 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513608 2812 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513641 2812 policy_none.go:49] "None policy: Start"
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513657 2812 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513677 2812 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:55:43.514909 kubelet[2812]: I1104 23:55:43.513860 2812 state_mem.go:75] "Updated machine memory state"
Nov 4 23:55:43.520721 kubelet[2812]: I1104 23:55:43.520666 2812 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 4 23:55:43.520924 kubelet[2812]: I1104 23:55:43.520886 2812 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:55:43.521009 kubelet[2812]: I1104 23:55:43.520908 2812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:55:43.523294 kubelet[2812]: I1104 23:55:43.522868 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:55:43.525458 kubelet[2812]: E1104 23:55:43.524806 2812 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:55:43.652827 kubelet[2812]: I1104 23:55:43.652792 2812 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.667946 kubelet[2812]: I1104 23:55:43.667879 2812 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.668434 kubelet[2812]: I1104 23:55:43.668313 2812 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.714508 kubelet[2812]: I1104 23:55:43.714465 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.716004 kubelet[2812]: I1104 23:55:43.715027 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.718010 kubelet[2812]: I1104 23:55:43.717640 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.726755 kubelet[2812]: W1104 23:55:43.726701 2812 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Nov 4 23:55:43.728608 kubelet[2812]: W1104 23:55:43.728365 2812 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Nov 4 23:55:43.732305 kubelet[2812]: W1104 23:55:43.730940 2812 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Nov 4 23:55:43.785819 kubelet[2812]: I1104 23:55:43.785564 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-ca-certs\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.785819 kubelet[2812]: I1104 23:55:43.785624 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.785819 kubelet[2812]: I1104 23:55:43.785659 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-k8s-certs\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.785819 kubelet[2812]: I1104 23:55:43.785698 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.786705 kubelet[2812]: I1104 23:55:43.785727 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68471c8acaba133a1dd3ed743d8e35bf-kubeconfig\") pod \"kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"68471c8acaba133a1dd3ed743d8e35bf\") " pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.786705 kubelet[2812]: I1104 23:55:43.785781 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-k8s-certs\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.786705 kubelet[2812]: I1104 23:55:43.785811 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.786705 kubelet[2812]: I1104 23:55:43.785841 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3b5ada23b63dae1a634c984f31b2613-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"c3b5ada23b63dae1a634c984f31b2613\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:43.786897 kubelet[2812]: I1104 23:55:43.785871 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f0da884f6fe84d5cd11abf61ec5cc89-ca-certs\") pod \"kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" (UID: \"8f0da884f6fe84d5cd11abf61ec5cc89\") " pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:44.328090 kubelet[2812]: I1104 23:55:44.328040 2812 apiserver.go:52] "Watching apiserver"
Nov 4 23:55:44.384672 kubelet[2812]: I1104 23:55:44.384612 2812 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 23:55:44.485569 kubelet[2812]: I1104 23:55:44.485512 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:44.503607 kubelet[2812]: W1104 23:55:44.503546 2812 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Nov 4 23:55:44.503793 kubelet[2812]: E1104 23:55:44.503643 2812 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" already exists" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8"
Nov 4 23:55:44.526317 kubelet[2812]: I1104 23:55:44.526194 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" podStartSLOduration=1.52614099 podStartE2EDuration="1.52614099s" podCreationTimestamp="2025-11-04 23:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:44.526132926 +0000 UTC m=+1.314316742" watchObservedRunningTime="2025-11-04 23:55:44.52614099 +0000 UTC m=+1.314324785"
Nov 4 23:55:44.561886 kubelet[2812]: I1104 23:55:44.561809 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" podStartSLOduration=1.561783767 podStartE2EDuration="1.561783767s" podCreationTimestamp="2025-11-04 23:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:44.54382077 +0000 UTC m=+1.332004586" watchObservedRunningTime="2025-11-04 23:55:44.561783767 +0000 UTC m=+1.349967586"
Nov 4 23:55:44.581734 kubelet[2812]: I1104 23:55:44.580879 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" podStartSLOduration=1.5808358930000002 podStartE2EDuration="1.580835893s" podCreationTimestamp="2025-11-04 23:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:44.563512672 +0000 UTC m=+1.351696489" watchObservedRunningTime="2025-11-04 23:55:44.580835893 +0000 UTC m=+1.369019715"
Nov 4 23:55:48.715286 kubelet[2812]: I1104 23:55:48.715223 2812 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 23:55:48.716339 containerd[1607]: time="2025-11-04T23:55:48.716238254Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 23:55:48.716851 kubelet[2812]: I1104 23:55:48.716831 2812 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 23:55:49.623812 kubelet[2812]: I1104 23:55:49.623609 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59a48a91-f33c-4526-a7d2-130a54cafd74-xtables-lock\") pod \"kube-proxy-zppdx\" (UID: \"59a48a91-f33c-4526-a7d2-130a54cafd74\") " pod="kube-system/kube-proxy-zppdx"
Nov 4 23:55:49.623812 kubelet[2812]: I1104 23:55:49.623664 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59a48a91-f33c-4526-a7d2-130a54cafd74-kube-proxy\") pod \"kube-proxy-zppdx\" (UID: \"59a48a91-f33c-4526-a7d2-130a54cafd74\") " pod="kube-system/kube-proxy-zppdx"
Nov 4 23:55:49.623812 kubelet[2812]: I1104 23:55:49.623697 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59a48a91-f33c-4526-a7d2-130a54cafd74-lib-modules\") pod \"kube-proxy-zppdx\" (UID: \"59a48a91-f33c-4526-a7d2-130a54cafd74\") " pod="kube-system/kube-proxy-zppdx"
Nov 4 23:55:49.623812 kubelet[2812]: I1104 23:55:49.623729 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqkfg\" (UniqueName: \"kubernetes.io/projected/59a48a91-f33c-4526-a7d2-130a54cafd74-kube-api-access-jqkfg\") pod \"kube-proxy-zppdx\" (UID: \"59a48a91-f33c-4526-a7d2-130a54cafd74\") " pod="kube-system/kube-proxy-zppdx"
Nov 4 23:55:49.625285 systemd[1]: Created slice kubepods-besteffort-pod59a48a91_f33c_4526_a7d2_130a54cafd74.slice - libcontainer container kubepods-besteffort-pod59a48a91_f33c_4526_a7d2_130a54cafd74.slice.
Nov 4 23:55:49.852256 systemd[1]: Created slice kubepods-besteffort-pod556656e8_ce20_4e75_864a_540c3c6886ed.slice - libcontainer container kubepods-besteffort-pod556656e8_ce20_4e75_864a_540c3c6886ed.slice.
Nov 4 23:55:49.925861 kubelet[2812]: I1104 23:55:49.925688 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/556656e8-ce20-4e75-864a-540c3c6886ed-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g6jh4\" (UID: \"556656e8-ce20-4e75-864a-540c3c6886ed\") " pod="tigera-operator/tigera-operator-7dcd859c48-g6jh4"
Nov 4 23:55:49.925861 kubelet[2812]: I1104 23:55:49.925753 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79h9\" (UniqueName: \"kubernetes.io/projected/556656e8-ce20-4e75-864a-540c3c6886ed-kube-api-access-x79h9\") pod \"tigera-operator-7dcd859c48-g6jh4\" (UID: \"556656e8-ce20-4e75-864a-540c3c6886ed\") " pod="tigera-operator/tigera-operator-7dcd859c48-g6jh4"
Nov 4 23:55:49.938892 containerd[1607]: time="2025-11-04T23:55:49.938821433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zppdx,Uid:59a48a91-f33c-4526-a7d2-130a54cafd74,Namespace:kube-system,Attempt:0,}"
Nov 4 23:55:49.968960 containerd[1607]: time="2025-11-04T23:55:49.968891011Z" level=info msg="connecting to shim 39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d" address="unix:///run/containerd/s/21de2531530e0e0f6956a6bcc3f80da38b1b814950c073b4160b6646361c65f4" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:55:50.017685 systemd[1]: Started cri-containerd-39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d.scope - libcontainer container 39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d.
Nov 4 23:55:50.071499 containerd[1607]: time="2025-11-04T23:55:50.071348528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zppdx,Uid:59a48a91-f33c-4526-a7d2-130a54cafd74,Namespace:kube-system,Attempt:0,} returns sandbox id \"39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d\""
Nov 4 23:55:50.076356 containerd[1607]: time="2025-11-04T23:55:50.076308523Z" level=info msg="CreateContainer within sandbox \"39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 23:55:50.094060 containerd[1607]: time="2025-11-04T23:55:50.094003714Z" level=info msg="Container eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:55:50.106963 containerd[1607]: time="2025-11-04T23:55:50.106897087Z" level=info msg="CreateContainer within sandbox \"39df22bf6032784ca623c57e589099f503473acdf00e5d37287c8751a3afcc1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b\""
Nov 4 23:55:50.108766 containerd[1607]: time="2025-11-04T23:55:50.108644026Z" level=info msg="StartContainer for \"eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b\""
Nov 4 23:55:50.111141 containerd[1607]: time="2025-11-04T23:55:50.111107085Z" level=info msg="connecting to shim eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b" address="unix:///run/containerd/s/21de2531530e0e0f6956a6bcc3f80da38b1b814950c073b4160b6646361c65f4" protocol=ttrpc version=3
Nov 4 23:55:50.138572 systemd[1]: Started cri-containerd-eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b.scope - libcontainer container eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b.
Nov 4 23:55:50.160870 containerd[1607]: time="2025-11-04T23:55:50.160541064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g6jh4,Uid:556656e8-ce20-4e75-864a-540c3c6886ed,Namespace:tigera-operator,Attempt:0,}"
Nov 4 23:55:50.196157 containerd[1607]: time="2025-11-04T23:55:50.194238008Z" level=info msg="connecting to shim 0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f" address="unix:///run/containerd/s/f162b96a2400d32e28b6c10f5be7ef569386e0dd643f6a7f3eda2a4e88b2f9a1" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:55:50.223253 containerd[1607]: time="2025-11-04T23:55:50.223203911Z" level=info msg="StartContainer for \"eb38c81fd517a69c68f56f0f7d24c063a8408c7e865e59484de8082b78d3551b\" returns successfully"
Nov 4 23:55:50.249225 systemd[1]: Started cri-containerd-0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f.scope - libcontainer container 0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f.
Nov 4 23:55:50.342128 containerd[1607]: time="2025-11-04T23:55:50.342058186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g6jh4,Uid:556656e8-ce20-4e75-864a-540c3c6886ed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f\""
Nov 4 23:55:50.346098 containerd[1607]: time="2025-11-04T23:55:50.346034133Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 23:55:50.754530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649881639.mount: Deactivated successfully.
Nov 4 23:55:51.198528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506151160.mount: Deactivated successfully.
Nov 4 23:55:52.240564 containerd[1607]: time="2025-11-04T23:55:52.240490993Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:52.241939 containerd[1607]: time="2025-11-04T23:55:52.241869892Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 4 23:55:52.243831 containerd[1607]: time="2025-11-04T23:55:52.243745988Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:52.247191 containerd[1607]: time="2025-11-04T23:55:52.247092914Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:52.248068 containerd[1607]: time="2025-11-04T23:55:52.247906968Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.901522965s"
Nov 4 23:55:52.248068 containerd[1607]: time="2025-11-04T23:55:52.247950661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 23:55:52.252583 containerd[1607]: time="2025-11-04T23:55:52.252535879Z" level=info msg="CreateContainer within sandbox \"0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 23:55:52.264990 containerd[1607]: time="2025-11-04T23:55:52.264935871Z" level=info msg="Container 9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:55:52.281425 containerd[1607]: time="2025-11-04T23:55:52.281363353Z" level=info msg="CreateContainer within sandbox \"0b830dafb5413a437dbcb2f6c41b82b566375de5732c0a54f2b1baffd6ba965f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8\""
Nov 4 23:55:52.282203 containerd[1607]: time="2025-11-04T23:55:52.282156381Z" level=info msg="StartContainer for \"9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8\""
Nov 4 23:55:52.283747 containerd[1607]: time="2025-11-04T23:55:52.283704253Z" level=info msg="connecting to shim 9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8" address="unix:///run/containerd/s/f162b96a2400d32e28b6c10f5be7ef569386e0dd643f6a7f3eda2a4e88b2f9a1" protocol=ttrpc version=3
Nov 4 23:55:52.324565 systemd[1]: Started cri-containerd-9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8.scope - libcontainer container 9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8.
Nov 4 23:55:52.372894 containerd[1607]: time="2025-11-04T23:55:52.372769460Z" level=info msg="StartContainer for \"9ae42f548012a3897edb1349c456ddf27c5561583651c26f3426fa4f1f183ed8\" returns successfully"
Nov 4 23:55:52.520160 kubelet[2812]: I1104 23:55:52.520032 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zppdx" podStartSLOduration=3.520006477 podStartE2EDuration="3.520006477s" podCreationTimestamp="2025-11-04 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:50.516401575 +0000 UTC m=+7.304585406" watchObservedRunningTime="2025-11-04 23:55:52.520006477 +0000 UTC m=+9.308190293"
Nov 4 23:55:54.374239 kubelet[2812]: I1104 23:55:54.374130 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g6jh4" podStartSLOduration=3.468976055 podStartE2EDuration="5.374103227s" podCreationTimestamp="2025-11-04 23:55:49 +0000 UTC" firstStartedPulling="2025-11-04 23:55:50.344253574 +0000 UTC m=+7.132437373" lastFinishedPulling="2025-11-04 23:55:52.249380746 +0000 UTC m=+9.037564545" observedRunningTime="2025-11-04 23:55:52.521392259 +0000 UTC m=+9.309576075" watchObservedRunningTime="2025-11-04 23:55:54.374103227 +0000 UTC m=+11.162287043"
Nov 4 23:55:55.098183 update_engine[1585]: I20251104 23:55:55.097342 1585 update_attempter.cc:509] Updating boot flags...
Nov 4 23:55:58.668957 sudo[1893]: pam_unix(sudo:session): session closed for user root
Nov 4 23:55:58.718326 sshd[1892]: Connection closed by 139.178.68.195 port 47572
Nov 4 23:55:58.719572 sshd-session[1889]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:58.735924 systemd-logind[1581]: Session 7 logged out. Waiting for processes to exit.
Nov 4 23:55:58.738884 systemd[1]: sshd@6-10.128.0.112:22-139.178.68.195:47572.service: Deactivated successfully.
Nov 4 23:55:58.749634 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 23:55:58.751540 systemd[1]: session-7.scope: Consumed 5.913s CPU time, 225.3M memory peak.
Nov 4 23:55:58.759729 systemd-logind[1581]: Removed session 7.
Nov 4 23:56:07.032635 systemd[1]: Created slice kubepods-besteffort-pod4c5fb260_d9b2_4753_865e_6be02dce33e6.slice - libcontainer container kubepods-besteffort-pod4c5fb260_d9b2_4753_865e_6be02dce33e6.slice.
Nov 4 23:56:07.049651 kubelet[2812]: I1104 23:56:07.049435 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c5fb260-d9b2-4753-865e-6be02dce33e6-tigera-ca-bundle\") pod \"calico-typha-5c8b99db4f-2r2wc\" (UID: \"4c5fb260-d9b2-4753-865e-6be02dce33e6\") " pod="calico-system/calico-typha-5c8b99db4f-2r2wc"
Nov 4 23:56:07.049651 kubelet[2812]: I1104 23:56:07.049528 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4c5fb260-d9b2-4753-865e-6be02dce33e6-typha-certs\") pod \"calico-typha-5c8b99db4f-2r2wc\" (UID: \"4c5fb260-d9b2-4753-865e-6be02dce33e6\") " pod="calico-system/calico-typha-5c8b99db4f-2r2wc"
Nov 4 23:56:07.049651 kubelet[2812]: I1104 23:56:07.049563 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppgbf\" (UniqueName: \"kubernetes.io/projected/4c5fb260-d9b2-4753-865e-6be02dce33e6-kube-api-access-ppgbf\") pod \"calico-typha-5c8b99db4f-2r2wc\" (UID: \"4c5fb260-d9b2-4753-865e-6be02dce33e6\") " pod="calico-system/calico-typha-5c8b99db4f-2r2wc"
Nov 4 23:56:07.211813 systemd[1]: Created slice kubepods-besteffort-pod60680e7e_c239_4e4a_a5bd_1428588b13aa.slice - libcontainer container kubepods-besteffort-pod60680e7e_c239_4e4a_a5bd_1428588b13aa.slice.
Nov 4 23:56:07.251983 kubelet[2812]: I1104 23:56:07.251879 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-cni-log-dir\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.251983 kubelet[2812]: I1104 23:56:07.252079 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-xtables-lock\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.252967 kubelet[2812]: I1104 23:56:07.252360 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60680e7e-c239-4e4a-a5bd-1428588b13aa-tigera-ca-bundle\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253662 kubelet[2812]: I1104 23:56:07.252886 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-var-lib-calico\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253662 kubelet[2812]: I1104 23:56:07.253201 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-policysync\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253662 kubelet[2812]: I1104 23:56:07.253249 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-var-run-calico\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253662 kubelet[2812]: I1104 23:56:07.253305 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-cni-bin-dir\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253662 kubelet[2812]: I1104 23:56:07.253337 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-cni-net-dir\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253920 kubelet[2812]: I1104 23:56:07.253364 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-flexvol-driver-host\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.253920 kubelet[2812]: I1104 23:56:07.253407 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60680e7e-c239-4e4a-a5bd-1428588b13aa-node-certs\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.255026 kubelet[2812]: I1104 23:56:07.254977 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60680e7e-c239-4e4a-a5bd-1428588b13aa-lib-modules\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.255247 kubelet[2812]: I1104 23:56:07.255192 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtx7h\" (UniqueName: \"kubernetes.io/projected/60680e7e-c239-4e4a-a5bd-1428588b13aa-kube-api-access-vtx7h\") pod \"calico-node-cd9nd\" (UID: \"60680e7e-c239-4e4a-a5bd-1428588b13aa\") " pod="calico-system/calico-node-cd9nd"
Nov 4 23:56:07.293124 kubelet[2812]: E1104 23:56:07.292415 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641"
Nov 4 23:56:07.341298 containerd[1607]: time="2025-11-04T23:56:07.340327416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c8b99db4f-2r2wc,Uid:4c5fb260-d9b2-4753-865e-6be02dce33e6,Namespace:calico-system,Attempt:0,}"
Nov 4 23:56:07.356888 kubelet[2812]: I1104 23:56:07.356440 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b068e-ec25-488d-b894-ad5a0b2e8641-socket-dir\") pod \"csi-node-driver-4w4qr\" (UID: \"ad1b068e-ec25-488d-b894-ad5a0b2e8641\") " pod="calico-system/csi-node-driver-4w4qr"
Nov 4 23:56:07.356888 kubelet[2812]: I1104 23:56:07.356499 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnwq\" (UniqueName: \"kubernetes.io/projected/ad1b068e-ec25-488d-b894-ad5a0b2e8641-kube-api-access-flnwq\") pod \"csi-node-driver-4w4qr\" (UID: \"ad1b068e-ec25-488d-b894-ad5a0b2e8641\") " pod="calico-system/csi-node-driver-4w4qr"
Nov 4 23:56:07.356888 kubelet[2812]: I1104 23:56:07.356545 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b068e-ec25-488d-b894-ad5a0b2e8641-kubelet-dir\") pod \"csi-node-driver-4w4qr\" (UID: \"ad1b068e-ec25-488d-b894-ad5a0b2e8641\") " pod="calico-system/csi-node-driver-4w4qr"
Nov 4 23:56:07.359954 kubelet[2812]: I1104 23:56:07.358731 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad1b068e-ec25-488d-b894-ad5a0b2e8641-registration-dir\") pod \"csi-node-driver-4w4qr\" (UID: \"ad1b068e-ec25-488d-b894-ad5a0b2e8641\") " pod="calico-system/csi-node-driver-4w4qr"
Nov 4 23:56:07.364831 kubelet[2812]: E1104 23:56:07.361999 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:07.364831 kubelet[2812]: W1104 23:56:07.362059 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:07.364831 kubelet[2812]: E1104 23:56:07.364475 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 23:56:07.371735 kubelet[2812]: E1104 23:56:07.371388 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.371735 kubelet[2812]: W1104 23:56:07.371427 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.371735 kubelet[2812]: E1104 23:56:07.371546 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.373403 kubelet[2812]: E1104 23:56:07.373375 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.375380 kubelet[2812]: W1104 23:56:07.373769 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.375962 kubelet[2812]: E1104 23:56:07.375523 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.375962 kubelet[2812]: I1104 23:56:07.375582 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad1b068e-ec25-488d-b894-ad5a0b2e8641-varrun\") pod \"csi-node-driver-4w4qr\" (UID: \"ad1b068e-ec25-488d-b894-ad5a0b2e8641\") " pod="calico-system/csi-node-driver-4w4qr" Nov 4 23:56:07.378878 kubelet[2812]: E1104 23:56:07.378500 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.378878 kubelet[2812]: W1104 23:56:07.378524 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.378878 kubelet[2812]: E1104 23:56:07.378572 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.379091 kubelet[2812]: E1104 23:56:07.378842 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.379840 kubelet[2812]: W1104 23:56:07.379094 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.379840 kubelet[2812]: E1104 23:56:07.379720 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.380487 kubelet[2812]: E1104 23:56:07.380392 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.380487 kubelet[2812]: W1104 23:56:07.380413 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.380487 kubelet[2812]: E1104 23:56:07.380450 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.381763 kubelet[2812]: E1104 23:56:07.381727 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.381873 kubelet[2812]: W1104 23:56:07.381792 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.382308 kubelet[2812]: E1104 23:56:07.382072 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.382308 kubelet[2812]: E1104 23:56:07.382158 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.382308 kubelet[2812]: W1104 23:56:07.382171 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.383017 kubelet[2812]: E1104 23:56:07.382508 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.383374 kubelet[2812]: E1104 23:56:07.383332 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.383374 kubelet[2812]: W1104 23:56:07.383352 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.383742 kubelet[2812]: E1104 23:56:07.383701 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.384860 kubelet[2812]: E1104 23:56:07.384732 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.384860 kubelet[2812]: W1104 23:56:07.384753 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.385204 kubelet[2812]: E1104 23:56:07.385177 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.386721 kubelet[2812]: E1104 23:56:07.386504 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.386721 kubelet[2812]: W1104 23:56:07.386526 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.386721 kubelet[2812]: E1104 23:56:07.386622 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.387115 kubelet[2812]: E1104 23:56:07.387096 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.387511 kubelet[2812]: W1104 23:56:07.387303 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.387767 kubelet[2812]: E1104 23:56:07.387742 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.388633 kubelet[2812]: E1104 23:56:07.388445 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.388633 kubelet[2812]: W1104 23:56:07.388465 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.389169 kubelet[2812]: E1104 23:56:07.388924 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.390565 kubelet[2812]: E1104 23:56:07.390415 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.390565 kubelet[2812]: W1104 23:56:07.390535 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.391056 kubelet[2812]: E1104 23:56:07.390894 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.392285 kubelet[2812]: E1104 23:56:07.391894 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.392285 kubelet[2812]: W1104 23:56:07.391914 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.392285 kubelet[2812]: E1104 23:56:07.391943 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.393804 kubelet[2812]: E1104 23:56:07.393362 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.393804 kubelet[2812]: W1104 23:56:07.393388 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.393804 kubelet[2812]: E1104 23:56:07.393408 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.394757 kubelet[2812]: E1104 23:56:07.394721 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.394757 kubelet[2812]: W1104 23:56:07.394756 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.395055 kubelet[2812]: E1104 23:56:07.394774 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.397687 kubelet[2812]: E1104 23:56:07.397363 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.397687 kubelet[2812]: W1104 23:56:07.397386 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.397687 kubelet[2812]: E1104 23:56:07.397405 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.398544 kubelet[2812]: E1104 23:56:07.398502 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.399668 kubelet[2812]: W1104 23:56:07.399333 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.399668 kubelet[2812]: E1104 23:56:07.399369 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.400371 kubelet[2812]: E1104 23:56:07.400345 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.400371 kubelet[2812]: W1104 23:56:07.400370 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.400936 kubelet[2812]: E1104 23:56:07.400389 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.403515 kubelet[2812]: E1104 23:56:07.402553 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.403515 kubelet[2812]: W1104 23:56:07.402576 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.403515 kubelet[2812]: E1104 23:56:07.402596 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.404380 containerd[1607]: time="2025-11-04T23:56:07.403890195Z" level=info msg="connecting to shim d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7" address="unix:///run/containerd/s/6ebccd51ab3ee40df112143aed383d316184acb6ed5c2b7ed43b2580a56381b3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:07.406442 kubelet[2812]: E1104 23:56:07.406417 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.406442 kubelet[2812]: W1104 23:56:07.406441 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.406653 kubelet[2812]: E1104 23:56:07.406466 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.418842 kubelet[2812]: E1104 23:56:07.418799 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.419518 kubelet[2812]: W1104 23:56:07.419154 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.422733 kubelet[2812]: E1104 23:56:07.422619 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.422733 kubelet[2812]: W1104 23:56:07.422654 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.424001 kubelet[2812]: E1104 23:56:07.423819 2812 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.424001 kubelet[2812]: W1104 23:56:07.423845 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.426462 kubelet[2812]: E1104 23:56:07.425452 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.426462 kubelet[2812]: E1104 23:56:07.425739 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.426462 kubelet[2812]: E1104 23:56:07.425763 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.427314 kubelet[2812]: E1104 23:56:07.427036 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.427314 kubelet[2812]: W1104 23:56:07.427060 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.430674 kubelet[2812]: E1104 23:56:07.430377 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.430674 kubelet[2812]: W1104 23:56:07.430404 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.431725 kubelet[2812]: E1104 23:56:07.431579 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.431725 kubelet[2812]: E1104 23:56:07.431617 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.432052 kubelet[2812]: E1104 23:56:07.432032 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.433408 kubelet[2812]: W1104 23:56:07.433191 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.434457 kubelet[2812]: E1104 23:56:07.434346 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.434760 kubelet[2812]: E1104 23:56:07.434710 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.435002 kubelet[2812]: W1104 23:56:07.434865 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.436226 kubelet[2812]: E1104 23:56:07.436075 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.436226 kubelet[2812]: W1104 23:56:07.436095 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.437449 kubelet[2812]: E1104 23:56:07.437427 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.438026 kubelet[2812]: W1104 23:56:07.437861 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Nov 4 23:56:07.438400 kubelet[2812]: E1104 23:56:07.438379 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.438677 kubelet[2812]: W1104 23:56:07.438536 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.438958 kubelet[2812]: E1104 23:56:07.438941 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.439345 kubelet[2812]: W1104 23:56:07.439070 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.439345 kubelet[2812]: E1104 23:56:07.439096 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.439345 kubelet[2812]: E1104 23:56:07.437567 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.440222 kubelet[2812]: E1104 23:56:07.440201 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.441417 kubelet[2812]: W1104 23:56:07.441263 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.441537 kubelet[2812]: E1104 23:56:07.441519 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.441659 kubelet[2812]: E1104 23:56:07.440769 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.441887 kubelet[2812]: E1104 23:56:07.440755 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.442695 kubelet[2812]: E1104 23:56:07.437583 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.443248 kubelet[2812]: E1104 23:56:07.443219 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.443918 kubelet[2812]: W1104 23:56:07.443627 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.443918 kubelet[2812]: E1104 23:56:07.443668 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.444798 kubelet[2812]: E1104 23:56:07.444762 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.445173 kubelet[2812]: W1104 23:56:07.445026 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.445383 kubelet[2812]: E1104 23:56:07.445303 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.464553 kubelet[2812]: E1104 23:56:07.462827 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.464553 kubelet[2812]: W1104 23:56:07.462860 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.464553 kubelet[2812]: E1104 23:56:07.462890 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.478468 kubelet[2812]: E1104 23:56:07.478407 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.478935 kubelet[2812]: W1104 23:56:07.478668 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.478935 kubelet[2812]: E1104 23:56:07.478716 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.479504 kubelet[2812]: E1104 23:56:07.479481 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.479670 kubelet[2812]: W1104 23:56:07.479647 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.479810 kubelet[2812]: E1104 23:56:07.479787 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.480666 kubelet[2812]: E1104 23:56:07.480638 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.480666 kubelet[2812]: W1104 23:56:07.480667 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.481182 kubelet[2812]: E1104 23:56:07.481153 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.481544 kubelet[2812]: E1104 23:56:07.481494 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.482918 kubelet[2812]: W1104 23:56:07.481630 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.482918 kubelet[2812]: E1104 23:56:07.482590 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.483291 kubelet[2812]: E1104 23:56:07.483194 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.483291 kubelet[2812]: W1104 23:56:07.483220 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.483905 kubelet[2812]: E1104 23:56:07.483861 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.484389 kubelet[2812]: E1104 23:56:07.484252 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.484389 kubelet[2812]: W1104 23:56:07.484295 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.484389 kubelet[2812]: E1104 23:56:07.484323 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.485011 kubelet[2812]: E1104 23:56:07.484993 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.485130 kubelet[2812]: W1104 23:56:07.485111 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.485320 kubelet[2812]: E1104 23:56:07.485300 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.487037 kubelet[2812]: E1104 23:56:07.486482 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.487037 kubelet[2812]: W1104 23:56:07.486503 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.487037 kubelet[2812]: E1104 23:56:07.486601 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.487037 kubelet[2812]: E1104 23:56:07.486928 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.487037 kubelet[2812]: W1104 23:56:07.486944 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.487663 kubelet[2812]: E1104 23:56:07.487446 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.487988 kubelet[2812]: E1104 23:56:07.487848 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.487988 kubelet[2812]: W1104 23:56:07.487874 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.488441 kubelet[2812]: E1104 23:56:07.488421 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.488563 kubelet[2812]: W1104 23:56:07.488543 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.489835 kubelet[2812]: E1104 23:56:07.489715 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.492432 kubelet[2812]: E1104 23:56:07.490042 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.492566 kubelet[2812]: W1104 23:56:07.492544 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.492976 kubelet[2812]: E1104 23:56:07.490055 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.493612 systemd[1]: Started cri-containerd-d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7.scope - libcontainer container d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7. Nov 4 23:56:07.494218 kubelet[2812]: E1104 23:56:07.493686 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.494218 kubelet[2812]: W1104 23:56:07.493702 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.498178 kubelet[2812]: E1104 23:56:07.498025 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.501495 kubelet[2812]: W1104 23:56:07.501454 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.509351 kubelet[2812]: E1104 23:56:07.509311 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.516016 kubelet[2812]: W1104 23:56:07.515975 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.516711 kubelet[2812]: E1104 23:56:07.516352 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.517219 kubelet[2812]: E1104 23:56:07.501419 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.517456 kubelet[2812]: E1104 23:56:07.501438 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.518646 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.523699 kubelet[2812]: W1104 23:56:07.519338 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.519374 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.515948 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.520505 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.523699 kubelet[2812]: W1104 23:56:07.520536 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.520561 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.522127 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.523699 kubelet[2812]: W1104 23:56:07.522146 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.523699 kubelet[2812]: E1104 23:56:07.522171 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.527704 containerd[1607]: time="2025-11-04T23:56:07.527655567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cd9nd,Uid:60680e7e-c239-4e4a-a5bd-1428588b13aa,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:07.531028 kubelet[2812]: E1104 23:56:07.530987 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.531357 kubelet[2812]: W1104 23:56:07.531319 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.531736 kubelet[2812]: E1104 23:56:07.531682 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.534050 kubelet[2812]: E1104 23:56:07.533450 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.534050 kubelet[2812]: W1104 23:56:07.533485 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.534050 kubelet[2812]: E1104 23:56:07.533509 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.534946 kubelet[2812]: E1104 23:56:07.534917 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.535912 kubelet[2812]: W1104 23:56:07.535885 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.537091 kubelet[2812]: E1104 23:56:07.536864 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.538039 kubelet[2812]: E1104 23:56:07.538019 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.538154 kubelet[2812]: W1104 23:56:07.538134 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.539233 kubelet[2812]: E1104 23:56:07.538219 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.539971 kubelet[2812]: E1104 23:56:07.539933 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.540187 kubelet[2812]: W1104 23:56:07.540157 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.540907 kubelet[2812]: E1104 23:56:07.540195 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.541408 kubelet[2812]: E1104 23:56:07.541218 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.541408 kubelet[2812]: W1104 23:56:07.541240 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.541771 kubelet[2812]: E1104 23:56:07.541262 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:07.542638 kubelet[2812]: E1104 23:56:07.542613 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.542638 kubelet[2812]: W1104 23:56:07.542637 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.542796 kubelet[2812]: E1104 23:56:07.542657 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.550925 kubelet[2812]: E1104 23:56:07.550781 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:07.550925 kubelet[2812]: W1104 23:56:07.550812 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:07.551474 kubelet[2812]: E1104 23:56:07.551250 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:07.572135 containerd[1607]: time="2025-11-04T23:56:07.572067663Z" level=info msg="connecting to shim adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190" address="unix:///run/containerd/s/9b3f0d6507027fd220a0572e7bd350032e1eeb415516834ec42e178d1b5930c7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:07.619550 systemd[1]: Started cri-containerd-adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190.scope - libcontainer container adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190. 
Nov 4 23:56:07.639386 containerd[1607]: time="2025-11-04T23:56:07.639317909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c8b99db4f-2r2wc,Uid:4c5fb260-d9b2-4753-865e-6be02dce33e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7\"" Nov 4 23:56:07.643301 containerd[1607]: time="2025-11-04T23:56:07.642995066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 23:56:07.681241 containerd[1607]: time="2025-11-04T23:56:07.681159624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cd9nd,Uid:60680e7e-c239-4e4a-a5bd-1428588b13aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\"" Nov 4 23:56:08.510061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211468233.mount: Deactivated successfully. Nov 4 23:56:09.415030 kubelet[2812]: E1104 23:56:09.414873 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:09.876399 containerd[1607]: time="2025-11-04T23:56:09.875707762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:09.877491 containerd[1607]: time="2025-11-04T23:56:09.877431678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 4 23:56:09.878614 containerd[1607]: time="2025-11-04T23:56:09.878531290Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:09.881709 
containerd[1607]: time="2025-11-04T23:56:09.881636147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:09.882948 containerd[1607]: time="2025-11-04T23:56:09.882501206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.238997534s" Nov 4 23:56:09.882948 containerd[1607]: time="2025-11-04T23:56:09.882548959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 23:56:09.884584 containerd[1607]: time="2025-11-04T23:56:09.884545790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 23:56:09.914852 containerd[1607]: time="2025-11-04T23:56:09.913467781Z" level=info msg="CreateContainer within sandbox \"d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 23:56:09.929311 containerd[1607]: time="2025-11-04T23:56:09.928601499Z" level=info msg="Container 1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:09.950323 containerd[1607]: time="2025-11-04T23:56:09.950240149Z" level=info msg="CreateContainer within sandbox \"d8e85c1f28b8eb34221e2b3bd87b1bac3bd4b1dd0281c74619ebe8692c50fcd7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead\"" Nov 4 23:56:09.954958 containerd[1607]: time="2025-11-04T23:56:09.954433108Z" 
level=info msg="StartContainer for \"1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead\"" Nov 4 23:56:09.958981 containerd[1607]: time="2025-11-04T23:56:09.958539203Z" level=info msg="connecting to shim 1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead" address="unix:///run/containerd/s/6ebccd51ab3ee40df112143aed383d316184acb6ed5c2b7ed43b2580a56381b3" protocol=ttrpc version=3 Nov 4 23:56:09.996608 systemd[1]: Started cri-containerd-1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead.scope - libcontainer container 1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead. Nov 4 23:56:10.071805 containerd[1607]: time="2025-11-04T23:56:10.071656065Z" level=info msg="StartContainer for \"1301c9b89a192f9bdc3d3074665e45471637573a1981e1f7f3f5e88d8fc8aead\" returns successfully" Nov 4 23:56:10.662810 kubelet[2812]: E1104 23:56:10.662703 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.662810 kubelet[2812]: W1104 23:56:10.662760 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.664128 kubelet[2812]: E1104 23:56:10.663490 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.664128 kubelet[2812]: E1104 23:56:10.663949 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.664128 kubelet[2812]: W1104 23:56:10.663991 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.664128 kubelet[2812]: E1104 23:56:10.664018 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.664771 kubelet[2812]: E1104 23:56:10.664740 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.664959 kubelet[2812]: W1104 23:56:10.664890 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.664959 kubelet[2812]: E1104 23:56:10.664917 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.665674 kubelet[2812]: E1104 23:56:10.665498 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.665674 kubelet[2812]: W1104 23:56:10.665521 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.665674 kubelet[2812]: E1104 23:56:10.665538 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.666082 kubelet[2812]: E1104 23:56:10.665980 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.666082 kubelet[2812]: W1104 23:56:10.666000 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.666082 kubelet[2812]: E1104 23:56:10.666018 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.666605 kubelet[2812]: E1104 23:56:10.666588 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.666728 kubelet[2812]: W1104 23:56:10.666712 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.666829 kubelet[2812]: E1104 23:56:10.666812 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.667310 kubelet[2812]: E1104 23:56:10.667200 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.667310 kubelet[2812]: W1104 23:56:10.667218 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.667310 kubelet[2812]: E1104 23:56:10.667236 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.667935 kubelet[2812]: E1104 23:56:10.667811 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.667935 kubelet[2812]: W1104 23:56:10.667829 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.667935 kubelet[2812]: E1104 23:56:10.667846 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.668533 kubelet[2812]: E1104 23:56:10.668373 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.668533 kubelet[2812]: W1104 23:56:10.668392 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.668533 kubelet[2812]: E1104 23:56:10.668409 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.669099 kubelet[2812]: E1104 23:56:10.668992 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.669099 kubelet[2812]: W1104 23:56:10.669012 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.669099 kubelet[2812]: E1104 23:56:10.669031 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.669675 kubelet[2812]: E1104 23:56:10.669558 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.669675 kubelet[2812]: W1104 23:56:10.669577 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.669675 kubelet[2812]: E1104 23:56:10.669597 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.670301 kubelet[2812]: E1104 23:56:10.670190 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.670301 kubelet[2812]: W1104 23:56:10.670211 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.670301 kubelet[2812]: E1104 23:56:10.670229 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.670867 kubelet[2812]: E1104 23:56:10.670766 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.670867 kubelet[2812]: W1104 23:56:10.670786 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.670867 kubelet[2812]: E1104 23:56:10.670803 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.671435 kubelet[2812]: E1104 23:56:10.671335 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.671435 kubelet[2812]: W1104 23:56:10.671355 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.671435 kubelet[2812]: E1104 23:56:10.671372 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.672016 kubelet[2812]: E1104 23:56:10.671900 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.672016 kubelet[2812]: W1104 23:56:10.671921 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.672016 kubelet[2812]: E1104 23:56:10.671939 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.743288 kubelet[2812]: E1104 23:56:10.743194 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.743288 kubelet[2812]: W1104 23:56:10.743231 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.743543 kubelet[2812]: E1104 23:56:10.743262 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.744152 kubelet[2812]: E1104 23:56:10.744127 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.744152 kubelet[2812]: W1104 23:56:10.744153 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.744508 kubelet[2812]: E1104 23:56:10.744183 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.745207 kubelet[2812]: E1104 23:56:10.745125 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.745207 kubelet[2812]: W1104 23:56:10.745158 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.745956 kubelet[2812]: E1104 23:56:10.745330 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.745956 kubelet[2812]: E1104 23:56:10.745595 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.745956 kubelet[2812]: W1104 23:56:10.745610 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.745956 kubelet[2812]: E1104 23:56:10.745647 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.746737 kubelet[2812]: E1104 23:56:10.745961 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.746737 kubelet[2812]: W1104 23:56:10.745975 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.746737 kubelet[2812]: E1104 23:56:10.746011 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.748179 kubelet[2812]: E1104 23:56:10.747509 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.748179 kubelet[2812]: W1104 23:56:10.747529 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.748179 kubelet[2812]: E1104 23:56:10.747555 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.748708 kubelet[2812]: E1104 23:56:10.748664 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.748708 kubelet[2812]: W1104 23:56:10.748686 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.749334 kubelet[2812]: E1104 23:56:10.749142 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.750443 kubelet[2812]: E1104 23:56:10.750322 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.751352 kubelet[2812]: W1104 23:56:10.750765 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.751352 kubelet[2812]: E1104 23:56:10.751211 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.751785 kubelet[2812]: E1104 23:56:10.751747 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.751785 kubelet[2812]: W1104 23:56:10.751765 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.752092 kubelet[2812]: E1104 23:56:10.752050 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.753249 kubelet[2812]: E1104 23:56:10.753222 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.753249 kubelet[2812]: W1104 23:56:10.753245 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.753471 kubelet[2812]: E1104 23:56:10.753413 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.754143 kubelet[2812]: E1104 23:56:10.754098 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.754143 kubelet[2812]: W1104 23:56:10.754118 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.754402 kubelet[2812]: E1104 23:56:10.754257 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.754787 kubelet[2812]: E1104 23:56:10.754746 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.754787 kubelet[2812]: W1104 23:56:10.754765 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.755117 kubelet[2812]: E1104 23:56:10.755085 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.755574 kubelet[2812]: E1104 23:56:10.755512 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.755574 kubelet[2812]: W1104 23:56:10.755530 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.755882 kubelet[2812]: E1104 23:56:10.755661 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.756446 kubelet[2812]: E1104 23:56:10.756405 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.756446 kubelet[2812]: W1104 23:56:10.756423 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.756988 kubelet[2812]: E1104 23:56:10.756713 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.757480 kubelet[2812]: E1104 23:56:10.757461 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.757480 kubelet[2812]: W1104 23:56:10.757575 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.757885 kubelet[2812]: E1104 23:56:10.757734 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.758501 kubelet[2812]: E1104 23:56:10.758392 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.758501 kubelet[2812]: W1104 23:56:10.758434 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.758501 kubelet[2812]: E1104 23:56:10.758456 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.759503 kubelet[2812]: E1104 23:56:10.759484 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.759810 kubelet[2812]: W1104 23:56:10.759623 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.759810 kubelet[2812]: E1104 23:56:10.759663 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:10.760388 kubelet[2812]: E1104 23:56:10.760350 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:10.760553 kubelet[2812]: W1104 23:56:10.760369 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:10.760553 kubelet[2812]: E1104 23:56:10.760506 2812 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:10.802251 containerd[1607]: time="2025-11-04T23:56:10.802182250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:10.803950 containerd[1607]: time="2025-11-04T23:56:10.803901342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 4 23:56:10.805234 containerd[1607]: time="2025-11-04T23:56:10.805180888Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:10.809295 containerd[1607]: time="2025-11-04T23:56:10.808291031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:10.809528 containerd[1607]: time="2025-11-04T23:56:10.809490763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 924.896347ms" Nov 4 23:56:10.809697 containerd[1607]: time="2025-11-04T23:56:10.809669225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 23:56:10.814300 containerd[1607]: time="2025-11-04T23:56:10.814224851Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 23:56:10.826987 containerd[1607]: time="2025-11-04T23:56:10.826906053Z" level=info msg="Container 19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:10.837121 containerd[1607]: time="2025-11-04T23:56:10.837052452Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\"" Nov 4 23:56:10.838619 containerd[1607]: time="2025-11-04T23:56:10.838549186Z" level=info msg="StartContainer for \"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\"" Nov 4 23:56:10.842114 containerd[1607]: time="2025-11-04T23:56:10.842064899Z" level=info msg="connecting to shim 19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a" address="unix:///run/containerd/s/9b3f0d6507027fd220a0572e7bd350032e1eeb415516834ec42e178d1b5930c7" protocol=ttrpc version=3 Nov 4 23:56:10.872542 systemd[1]: Started cri-containerd-19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a.scope - libcontainer container 19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a. Nov 4 23:56:10.944417 containerd[1607]: time="2025-11-04T23:56:10.941553771Z" level=info msg="StartContainer for \"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\" returns successfully" Nov 4 23:56:10.964666 systemd[1]: cri-containerd-19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a.scope: Deactivated successfully. 
Nov 4 23:56:10.971516 containerd[1607]: time="2025-11-04T23:56:10.971410002Z" level=info msg="received exit event container_id:\"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\" id:\"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\" pid:3494 exited_at:{seconds:1762300570 nanos:970673881}" Nov 4 23:56:10.971516 containerd[1607]: time="2025-11-04T23:56:10.971471802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\" id:\"19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a\" pid:3494 exited_at:{seconds:1762300570 nanos:970673881}" Nov 4 23:56:11.012431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19339e93d341f01f96ed429871096a7a99d72e65354d8dd53abccaa7d55fff6a-rootfs.mount: Deactivated successfully. Nov 4 23:56:11.414194 kubelet[2812]: E1104 23:56:11.414123 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:11.590287 kubelet[2812]: I1104 23:56:11.590210 2812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:56:11.614957 kubelet[2812]: I1104 23:56:11.614821 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c8b99db4f-2r2wc" podStartSLOduration=3.373341625 podStartE2EDuration="5.614794873s" podCreationTimestamp="2025-11-04 23:56:06 +0000 UTC" firstStartedPulling="2025-11-04 23:56:07.642432364 +0000 UTC m=+24.430616164" lastFinishedPulling="2025-11-04 23:56:09.883885607 +0000 UTC m=+26.672069412" observedRunningTime="2025-11-04 23:56:10.607709676 +0000 UTC m=+27.395893490" watchObservedRunningTime="2025-11-04 23:56:11.614794873 +0000 UTC m=+28.402978691" Nov 
4 23:56:12.597165 containerd[1607]: time="2025-11-04T23:56:12.597113442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:56:13.417292 kubelet[2812]: E1104 23:56:13.417090 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:15.418345 kubelet[2812]: E1104 23:56:15.414786 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:15.821941 containerd[1607]: time="2025-11-04T23:56:15.821883913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:15.823566 containerd[1607]: time="2025-11-04T23:56:15.823518457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 4 23:56:15.824736 containerd[1607]: time="2025-11-04T23:56:15.824697390Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:15.828759 containerd[1607]: time="2025-11-04T23:56:15.828714446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:15.830299 containerd[1607]: time="2025-11-04T23:56:15.829966889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.232719661s" Nov 4 23:56:15.830299 containerd[1607]: time="2025-11-04T23:56:15.830010903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:56:15.834737 containerd[1607]: time="2025-11-04T23:56:15.834700668Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:56:15.849863 containerd[1607]: time="2025-11-04T23:56:15.849812575Z" level=info msg="Container 7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:15.858978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106704260.mount: Deactivated successfully. 
Nov 4 23:56:15.866751 containerd[1607]: time="2025-11-04T23:56:15.866686424Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\"" Nov 4 23:56:15.867589 containerd[1607]: time="2025-11-04T23:56:15.867504593Z" level=info msg="StartContainer for \"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\"" Nov 4 23:56:15.870175 containerd[1607]: time="2025-11-04T23:56:15.870120193Z" level=info msg="connecting to shim 7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d" address="unix:///run/containerd/s/9b3f0d6507027fd220a0572e7bd350032e1eeb415516834ec42e178d1b5930c7" protocol=ttrpc version=3 Nov 4 23:56:15.911518 systemd[1]: Started cri-containerd-7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d.scope - libcontainer container 7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d. Nov 4 23:56:15.979245 containerd[1607]: time="2025-11-04T23:56:15.979110627Z" level=info msg="StartContainer for \"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\" returns successfully" Nov 4 23:56:17.037459 containerd[1607]: time="2025-11-04T23:56:17.037377868Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:56:17.040702 systemd[1]: cri-containerd-7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d.scope: Deactivated successfully. Nov 4 23:56:17.041297 systemd[1]: cri-containerd-7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d.scope: Consumed 690ms CPU time, 199.8M memory peak, 171.3M written to disk. 
Nov 4 23:56:17.044974 containerd[1607]: time="2025-11-04T23:56:17.044868942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\" id:\"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\" pid:3554 exited_at:{seconds:1762300577 nanos:44098165}" Nov 4 23:56:17.044974 containerd[1607]: time="2025-11-04T23:56:17.044945225Z" level=info msg="received exit event container_id:\"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\" id:\"7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d\" pid:3554 exited_at:{seconds:1762300577 nanos:44098165}" Nov 4 23:56:17.074976 kubelet[2812]: I1104 23:56:17.074837 2812 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:56:17.088259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e57460ebea7b0c6690a5a730a7a13c650c151cd3e3f307b857e1563af18781d-rootfs.mount: Deactivated successfully. Nov 4 23:56:17.147225 systemd[1]: Created slice kubepods-burstable-podee9f53df_c688_4c13_8b56_bd8cb9b0e064.slice - libcontainer container kubepods-burstable-podee9f53df_c688_4c13_8b56_bd8cb9b0e064.slice. Nov 4 23:56:17.164688 systemd[1]: Created slice kubepods-burstable-pod95c4ef41_4f67_4f60_8777_c0dce25ae7f4.slice - libcontainer container kubepods-burstable-pod95c4ef41_4f67_4f60_8777_c0dce25ae7f4.slice. Nov 4 23:56:17.192504 systemd[1]: Created slice kubepods-besteffort-pod196a06c5_2bf4_4f10_938e_eef198e9214f.slice - libcontainer container kubepods-besteffort-pod196a06c5_2bf4_4f10_938e_eef198e9214f.slice. Nov 4 23:56:17.213092 systemd[1]: Created slice kubepods-besteffort-podfb4aa393_f03a_4f04_a545_20b10128cfa9.slice - libcontainer container kubepods-besteffort-podfb4aa393_f03a_4f04_a545_20b10128cfa9.slice. 
Nov 4 23:56:17.223133 systemd[1]: Created slice kubepods-besteffort-pod0cfb7e2a_0604_407b_ae48_6da4047f5d80.slice - libcontainer container kubepods-besteffort-pod0cfb7e2a_0604_407b_ae48_6da4047f5d80.slice. Nov 4 23:56:17.226169 kubelet[2812]: W1104 23:56:17.226123 2812 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object Nov 4 23:56:17.226426 kubelet[2812]: E1104 23:56:17.226189 2812 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object" logger="UnhandledError" Nov 4 23:56:17.226426 kubelet[2812]: W1104 23:56:17.226295 2812 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object Nov 4 23:56:17.226426 kubelet[2812]: E1104 23:56:17.226319 2812 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" cannot list 
resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object" logger="UnhandledError" Nov 4 23:56:17.226426 kubelet[2812]: W1104 23:56:17.226391 2812 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object Nov 4 23:56:17.226748 kubelet[2812]: E1104 23:56:17.226408 2812 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object" logger="UnhandledError" Nov 4 23:56:17.228295 kubelet[2812]: W1104 23:56:17.226890 2812 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object Nov 4 23:56:17.228295 kubelet[2812]: E1104 23:56:17.226926 2812 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User 
\"system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object" logger="UnhandledError" Nov 4 23:56:17.228295 kubelet[2812]: W1104 23:56:17.227208 2812 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object Nov 4 23:56:17.228295 kubelet[2812]: E1104 23:56:17.227233 2812 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' and this object" logger="UnhandledError" Nov 4 23:56:17.238170 systemd[1]: Created slice kubepods-besteffort-pod747bb446_7545_4615_a310_0f2e7073dd98.slice - libcontainer container kubepods-besteffort-pod747bb446_7545_4615_a310_0f2e7073dd98.slice. Nov 4 23:56:17.256954 systemd[1]: Created slice kubepods-besteffort-pod15a4baa5_351d_4330_9bf0_0494048d0ffd.slice - libcontainer container kubepods-besteffort-pod15a4baa5_351d_4330_9bf0_0494048d0ffd.slice. 
Nov 4 23:56:17.339110 kubelet[2812]: I1104 23:56:17.296703 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m874f\" (UniqueName: \"kubernetes.io/projected/196a06c5-2bf4-4f10-938e-eef198e9214f-kube-api-access-m874f\") pod \"calico-kube-controllers-547c989ccf-8nsrm\" (UID: \"196a06c5-2bf4-4f10-938e-eef198e9214f\") " pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" Nov 4 23:56:17.339110 kubelet[2812]: I1104 23:56:17.296761 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4aa393-f03a-4f04-a545-20b10128cfa9-config\") pod \"goldmane-666569f655-x8fzw\" (UID: \"fb4aa393-f03a-4f04-a545-20b10128cfa9\") " pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:17.339110 kubelet[2812]: I1104 23:56:17.296798 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpzl8\" (UniqueName: \"kubernetes.io/projected/15a4baa5-351d-4330-9bf0-0494048d0ffd-kube-api-access-cpzl8\") pod \"calico-apiserver-5cd57c848d-bjqgd\" (UID: \"15a4baa5-351d-4330-9bf0-0494048d0ffd\") " pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" Nov 4 23:56:17.339110 kubelet[2812]: I1104 23:56:17.296824 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196a06c5-2bf4-4f10-938e-eef198e9214f-tigera-ca-bundle\") pod \"calico-kube-controllers-547c989ccf-8nsrm\" (UID: \"196a06c5-2bf4-4f10-938e-eef198e9214f\") " pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" Nov 4 23:56:17.339110 kubelet[2812]: I1104 23:56:17.296852 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747bb446-7545-4615-a310-0f2e7073dd98-whisker-ca-bundle\") pod 
\"whisker-846ccf58dd-2w8q7\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " pod="calico-system/whisker-846ccf58dd-2w8q7" Nov 4 23:56:17.339510 kubelet[2812]: I1104 23:56:17.296879 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxxb\" (UniqueName: \"kubernetes.io/projected/0cfb7e2a-0604-407b-ae48-6da4047f5d80-kube-api-access-rnxxb\") pod \"calico-apiserver-5cd57c848d-tc66q\" (UID: \"0cfb7e2a-0604-407b-ae48-6da4047f5d80\") " pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" Nov 4 23:56:17.339510 kubelet[2812]: I1104 23:56:17.296909 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dmq\" (UniqueName: \"kubernetes.io/projected/95c4ef41-4f67-4f60-8777-c0dce25ae7f4-kube-api-access-d7dmq\") pod \"coredns-668d6bf9bc-ftl8k\" (UID: \"95c4ef41-4f67-4f60-8777-c0dce25ae7f4\") " pod="kube-system/coredns-668d6bf9bc-ftl8k" Nov 4 23:56:17.339510 kubelet[2812]: I1104 23:56:17.296934 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwp5s\" (UniqueName: \"kubernetes.io/projected/fb4aa393-f03a-4f04-a545-20b10128cfa9-kube-api-access-qwp5s\") pod \"goldmane-666569f655-x8fzw\" (UID: \"fb4aa393-f03a-4f04-a545-20b10128cfa9\") " pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:17.339510 kubelet[2812]: I1104 23:56:17.296962 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0cfb7e2a-0604-407b-ae48-6da4047f5d80-calico-apiserver-certs\") pod \"calico-apiserver-5cd57c848d-tc66q\" (UID: \"0cfb7e2a-0604-407b-ae48-6da4047f5d80\") " pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" Nov 4 23:56:17.339510 kubelet[2812]: I1104 23:56:17.296997 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb4aa393-f03a-4f04-a545-20b10128cfa9-goldmane-ca-bundle\") pod \"goldmane-666569f655-x8fzw\" (UID: \"fb4aa393-f03a-4f04-a545-20b10128cfa9\") " pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:17.339761 kubelet[2812]: I1104 23:56:17.297029 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c4ef41-4f67-4f60-8777-c0dce25ae7f4-config-volume\") pod \"coredns-668d6bf9bc-ftl8k\" (UID: \"95c4ef41-4f67-4f60-8777-c0dce25ae7f4\") " pod="kube-system/coredns-668d6bf9bc-ftl8k" Nov 4 23:56:17.339761 kubelet[2812]: I1104 23:56:17.297056 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fb4aa393-f03a-4f04-a545-20b10128cfa9-goldmane-key-pair\") pod \"goldmane-666569f655-x8fzw\" (UID: \"fb4aa393-f03a-4f04-a545-20b10128cfa9\") " pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:17.339761 kubelet[2812]: I1104 23:56:17.297084 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/747bb446-7545-4615-a310-0f2e7073dd98-whisker-backend-key-pair\") pod \"whisker-846ccf58dd-2w8q7\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " pod="calico-system/whisker-846ccf58dd-2w8q7" Nov 4 23:56:17.339761 kubelet[2812]: I1104 23:56:17.297156 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhg6\" (UniqueName: \"kubernetes.io/projected/747bb446-7545-4615-a310-0f2e7073dd98-kube-api-access-jrhg6\") pod \"whisker-846ccf58dd-2w8q7\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " pod="calico-system/whisker-846ccf58dd-2w8q7" Nov 4 23:56:17.339761 kubelet[2812]: I1104 23:56:17.297188 2812 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee9f53df-c688-4c13-8b56-bd8cb9b0e064-config-volume\") pod \"coredns-668d6bf9bc-zs7cl\" (UID: \"ee9f53df-c688-4c13-8b56-bd8cb9b0e064\") " pod="kube-system/coredns-668d6bf9bc-zs7cl" Nov 4 23:56:17.345093 kubelet[2812]: I1104 23:56:17.297211 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgzh\" (UniqueName: \"kubernetes.io/projected/ee9f53df-c688-4c13-8b56-bd8cb9b0e064-kube-api-access-wzgzh\") pod \"coredns-668d6bf9bc-zs7cl\" (UID: \"ee9f53df-c688-4c13-8b56-bd8cb9b0e064\") " pod="kube-system/coredns-668d6bf9bc-zs7cl" Nov 4 23:56:17.345093 kubelet[2812]: I1104 23:56:17.297246 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15a4baa5-351d-4330-9bf0-0494048d0ffd-calico-apiserver-certs\") pod \"calico-apiserver-5cd57c848d-bjqgd\" (UID: \"15a4baa5-351d-4330-9bf0-0494048d0ffd\") " pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" Nov 4 23:56:17.476844 systemd[1]: Created slice kubepods-besteffort-podad1b068e_ec25_488d_b894_ad5a0b2e8641.slice - libcontainer container kubepods-besteffort-podad1b068e_ec25_488d_b894_ad5a0b2e8641.slice. 
Nov 4 23:56:17.499787 containerd[1607]: time="2025-11-04T23:56:17.498425442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w4qr,Uid:ad1b068e-ec25-488d-b894-ad5a0b2e8641,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:17.509484 containerd[1607]: time="2025-11-04T23:56:17.509414803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547c989ccf-8nsrm,Uid:196a06c5-2bf4-4f10-938e-eef198e9214f,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:17.647248 containerd[1607]: time="2025-11-04T23:56:17.647044732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-bjqgd,Uid:15a4baa5-351d-4330-9bf0-0494048d0ffd,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:17.767597 containerd[1607]: time="2025-11-04T23:56:17.767522912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zs7cl,Uid:ee9f53df-c688-4c13-8b56-bd8cb9b0e064,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:17.785659 containerd[1607]: time="2025-11-04T23:56:17.785602152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftl8k,Uid:95c4ef41-4f67-4f60-8777-c0dce25ae7f4,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:17.786383 containerd[1607]: time="2025-11-04T23:56:17.786234309Z" level=error msg="Failed to destroy network for sandbox \"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.791494 containerd[1607]: time="2025-11-04T23:56:17.791384165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w4qr,Uid:ad1b068e-ec25-488d-b894-ad5a0b2e8641,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.792549 kubelet[2812]: E1104 23:56:17.792259 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.792549 kubelet[2812]: E1104 23:56:17.792428 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4w4qr" Nov 4 23:56:17.792549 kubelet[2812]: E1104 23:56:17.792466 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4w4qr" Nov 4 23:56:17.792758 kubelet[2812]: E1104 23:56:17.792545 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe24a0c68b7c97534e9465f67ff847de8aec6da717fdb5fc07c86ecc7ef24415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:17.831616 containerd[1607]: time="2025-11-04T23:56:17.831229212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-tc66q,Uid:0cfb7e2a-0604-407b-ae48-6da4047f5d80,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:17.914848 containerd[1607]: time="2025-11-04T23:56:17.914674906Z" level=error msg="Failed to destroy network for sandbox \"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.919428 containerd[1607]: time="2025-11-04T23:56:17.918542268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547c989ccf-8nsrm,Uid:196a06c5-2bf4-4f10-938e-eef198e9214f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.919648 kubelet[2812]: E1104 23:56:17.918838 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.919648 kubelet[2812]: E1104 23:56:17.918922 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" Nov 4 23:56:17.919648 kubelet[2812]: E1104 23:56:17.918957 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" Nov 4 23:56:17.919836 kubelet[2812]: E1104 23:56:17.919026 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547c989ccf-8nsrm_calico-system(196a06c5-2bf4-4f10-938e-eef198e9214f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547c989ccf-8nsrm_calico-system(196a06c5-2bf4-4f10-938e-eef198e9214f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6c85c9f97f4687a222d715dcf0e13eb5fefae0d99993325003b02f67512179e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" 
podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:56:17.920324 containerd[1607]: time="2025-11-04T23:56:17.920045452Z" level=error msg="Failed to destroy network for sandbox \"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.923567 containerd[1607]: time="2025-11-04T23:56:17.923431225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-bjqgd,Uid:15a4baa5-351d-4330-9bf0-0494048d0ffd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.924469 kubelet[2812]: E1104 23:56:17.923885 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.924469 kubelet[2812]: E1104 23:56:17.924379 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" Nov 4 23:56:17.924469 
kubelet[2812]: E1104 23:56:17.924423 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" Nov 4 23:56:17.924932 kubelet[2812]: E1104 23:56:17.924843 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd57c848d-bjqgd_calico-apiserver(15a4baa5-351d-4330-9bf0-0494048d0ffd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cd57c848d-bjqgd_calico-apiserver(15a4baa5-351d-4330-9bf0-0494048d0ffd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee8be8f1b9cd54c3744ea9b711fd77fd57ce62d10442e98a07aa2ce0235eb9df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:17.961614 containerd[1607]: time="2025-11-04T23:56:17.961511046Z" level=error msg="Failed to destroy network for sandbox \"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.964305 containerd[1607]: time="2025-11-04T23:56:17.964146796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zs7cl,Uid:ee9f53df-c688-4c13-8b56-bd8cb9b0e064,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.965205 kubelet[2812]: E1104 23:56:17.964451 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:17.965205 kubelet[2812]: E1104 23:56:17.964539 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zs7cl" Nov 4 23:56:17.965205 kubelet[2812]: E1104 23:56:17.964574 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zs7cl" Nov 4 23:56:17.965567 kubelet[2812]: E1104 23:56:17.964654 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zs7cl_kube-system(ee9f53df-c688-4c13-8b56-bd8cb9b0e064)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-668d6bf9bc-zs7cl_kube-system(ee9f53df-c688-4c13-8b56-bd8cb9b0e064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b13859fe54cd5b530cbeeb2aa02a56edc40b19915ef7035bbcbc2f9aa5152cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zs7cl" podUID="ee9f53df-c688-4c13-8b56-bd8cb9b0e064" Nov 4 23:56:18.000455 containerd[1607]: time="2025-11-04T23:56:18.000388492Z" level=error msg="Failed to destroy network for sandbox \"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.003325 containerd[1607]: time="2025-11-04T23:56:18.003205647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftl8k,Uid:95c4ef41-4f67-4f60-8777-c0dce25ae7f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.005360 kubelet[2812]: E1104 23:56:18.003877 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.005360 kubelet[2812]: E1104 23:56:18.003958 2812 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ftl8k" Nov 4 23:56:18.005360 kubelet[2812]: E1104 23:56:18.004000 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ftl8k" Nov 4 23:56:18.005647 kubelet[2812]: E1104 23:56:18.004063 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ftl8k_kube-system(95c4ef41-4f67-4f60-8777-c0dce25ae7f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ftl8k_kube-system(95c4ef41-4f67-4f60-8777-c0dce25ae7f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a111817708c428434b528abab81537e16fb78643543e6602177e97ca631a240\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ftl8k" podUID="95c4ef41-4f67-4f60-8777-c0dce25ae7f4" Nov 4 23:56:18.012556 containerd[1607]: time="2025-11-04T23:56:18.012493549Z" level=error msg="Failed to destroy network for sandbox \"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.014371 containerd[1607]: time="2025-11-04T23:56:18.014200866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-tc66q,Uid:0cfb7e2a-0604-407b-ae48-6da4047f5d80,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.014608 kubelet[2812]: E1104 23:56:18.014553 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.014702 kubelet[2812]: E1104 23:56:18.014639 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" Nov 4 23:56:18.014702 kubelet[2812]: E1104 23:56:18.014668 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" Nov 4 23:56:18.014805 kubelet[2812]: E1104 23:56:18.014733 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd57c848d-tc66q_calico-apiserver(0cfb7e2a-0604-407b-ae48-6da4047f5d80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cd57c848d-tc66q_calico-apiserver(0cfb7e2a-0604-407b-ae48-6da4047f5d80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cdbf3da1595536a6ed5af1144fe46d28d09f3966fa6ab4c8d7c14a51548aebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:56:18.106630 kubelet[2812]: I1104 23:56:18.102990 2812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:56:18.422246 containerd[1607]: time="2025-11-04T23:56:18.422190087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8fzw,Uid:fb4aa393-f03a-4f04-a545-20b10128cfa9,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:18.499355 containerd[1607]: time="2025-11-04T23:56:18.499258914Z" level=error msg="Failed to destroy network for sandbox \"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.503520 containerd[1607]: time="2025-11-04T23:56:18.503417817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8fzw,Uid:fb4aa393-f03a-4f04-a545-20b10128cfa9,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.504634 kubelet[2812]: E1104 23:56:18.504560 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.504780 kubelet[2812]: E1104 23:56:18.504649 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:18.504780 kubelet[2812]: E1104 23:56:18.504682 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x8fzw" Nov 4 23:56:18.504780 kubelet[2812]: E1104 23:56:18.504749 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-x8fzw_calico-system(fb4aa393-f03a-4f04-a545-20b10128cfa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x8fzw_calico-system(fb4aa393-f03a-4f04-a545-20b10128cfa9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f28980f3b269acb5a25cbbdd56870f3b697b5afaecf5621436cb18dc42d0fd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:56:18.507169 systemd[1]: run-netns-cni\x2ddd7a6b41\x2de3e6\x2d9542\x2d5dc7\x2d8df1686287a3.mount: Deactivated successfully. Nov 4 23:56:18.546651 containerd[1607]: time="2025-11-04T23:56:18.546597777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-846ccf58dd-2w8q7,Uid:747bb446-7545-4615-a310-0f2e7073dd98,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:18.620259 containerd[1607]: time="2025-11-04T23:56:18.620148789Z" level=error msg="Failed to destroy network for sandbox \"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.623450 containerd[1607]: time="2025-11-04T23:56:18.623394370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-846ccf58dd-2w8q7,Uid:747bb446-7545-4615-a310-0f2e7073dd98,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 
23:56:18.624636 kubelet[2812]: E1104 23:56:18.624003 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:18.624636 kubelet[2812]: E1104 23:56:18.624065 2812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-846ccf58dd-2w8q7" Nov 4 23:56:18.624636 kubelet[2812]: E1104 23:56:18.624098 2812 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-846ccf58dd-2w8q7" Nov 4 23:56:18.624906 kubelet[2812]: E1104 23:56:18.624155 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-846ccf58dd-2w8q7_calico-system(747bb446-7545-4615-a310-0f2e7073dd98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-846ccf58dd-2w8q7_calico-system(747bb446-7545-4615-a310-0f2e7073dd98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dced97543e4b141e20fd7a5aef3fea92c80ab8e8b55adfaf9c5d014664556f4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-846ccf58dd-2w8q7" podUID="747bb446-7545-4615-a310-0f2e7073dd98" Nov 4 23:56:18.633216 containerd[1607]: time="2025-11-04T23:56:18.633152415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:56:24.030424 systemd[1]: Started sshd@7-10.128.0.112:22-118.114.13.24:62482.service - OpenSSH per-connection server daemon (118.114.13.24:62482). Nov 4 23:56:25.517514 sshd[3812]: Received disconnect from 118.114.13.24 port 62482:11: Bye Bye [preauth] Nov 4 23:56:25.517514 sshd[3812]: Disconnected from authenticating user root 118.114.13.24 port 62482 [preauth] Nov 4 23:56:25.521122 systemd[1]: sshd@7-10.128.0.112:22-118.114.13.24:62482.service: Deactivated successfully. Nov 4 23:56:25.625813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220817428.mount: Deactivated successfully. Nov 4 23:56:25.663991 containerd[1607]: time="2025-11-04T23:56:25.663882930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:25.666225 containerd[1607]: time="2025-11-04T23:56:25.666161529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:56:25.668966 containerd[1607]: time="2025-11-04T23:56:25.668909694Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:25.676363 containerd[1607]: time="2025-11-04T23:56:25.674705208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:25.676542 containerd[1607]: 
time="2025-11-04T23:56:25.676473323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.043030615s" Nov 4 23:56:25.676542 containerd[1607]: time="2025-11-04T23:56:25.676516890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:56:25.693365 containerd[1607]: time="2025-11-04T23:56:25.691748472Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:56:25.715538 containerd[1607]: time="2025-11-04T23:56:25.715484552Z" level=info msg="Container 542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:25.734787 containerd[1607]: time="2025-11-04T23:56:25.734733031Z" level=info msg="CreateContainer within sandbox \"adbd832dc6418175fc84d7f5c2f6284fb3dea98cb377bc0715be92dfe6436190\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\"" Nov 4 23:56:25.735624 containerd[1607]: time="2025-11-04T23:56:25.735584799Z" level=info msg="StartContainer for \"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\"" Nov 4 23:56:25.739634 containerd[1607]: time="2025-11-04T23:56:25.739516404Z" level=info msg="connecting to shim 542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063" address="unix:///run/containerd/s/9b3f0d6507027fd220a0572e7bd350032e1eeb415516834ec42e178d1b5930c7" protocol=ttrpc version=3 Nov 4 23:56:25.770547 systemd[1]: Started 
cri-containerd-542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063.scope - libcontainer container 542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063. Nov 4 23:56:25.844316 containerd[1607]: time="2025-11-04T23:56:25.843283879Z" level=info msg="StartContainer for \"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\" returns successfully" Nov 4 23:56:25.966065 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:56:25.966339 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 4 23:56:26.283429 kubelet[2812]: I1104 23:56:26.283083 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747bb446-7545-4615-a310-0f2e7073dd98-whisker-ca-bundle\") pod \"747bb446-7545-4615-a310-0f2e7073dd98\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " Nov 4 23:56:26.285578 kubelet[2812]: I1104 23:56:26.284574 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/747bb446-7545-4615-a310-0f2e7073dd98-whisker-backend-key-pair\") pod \"747bb446-7545-4615-a310-0f2e7073dd98\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " Nov 4 23:56:26.285578 kubelet[2812]: I1104 23:56:26.284655 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhg6\" (UniqueName: \"kubernetes.io/projected/747bb446-7545-4615-a310-0f2e7073dd98-kube-api-access-jrhg6\") pod \"747bb446-7545-4615-a310-0f2e7073dd98\" (UID: \"747bb446-7545-4615-a310-0f2e7073dd98\") " Nov 4 23:56:26.286503 kubelet[2812]: I1104 23:56:26.286464 2812 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747bb446-7545-4615-a310-0f2e7073dd98-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "747bb446-7545-4615-a310-0f2e7073dd98" (UID: 
"747bb446-7545-4615-a310-0f2e7073dd98"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:26.295929 kubelet[2812]: I1104 23:56:26.295836 2812 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747bb446-7545-4615-a310-0f2e7073dd98-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "747bb446-7545-4615-a310-0f2e7073dd98" (UID: "747bb446-7545-4615-a310-0f2e7073dd98"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:56:26.296591 kubelet[2812]: I1104 23:56:26.296498 2812 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747bb446-7545-4615-a310-0f2e7073dd98-kube-api-access-jrhg6" (OuterVolumeSpecName: "kube-api-access-jrhg6") pod "747bb446-7545-4615-a310-0f2e7073dd98" (UID: "747bb446-7545-4615-a310-0f2e7073dd98"). InnerVolumeSpecName "kube-api-access-jrhg6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:26.385450 kubelet[2812]: I1104 23:56:26.385380 2812 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/747bb446-7545-4615-a310-0f2e7073dd98-whisker-backend-key-pair\") on node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" DevicePath \"\"" Nov 4 23:56:26.385450 kubelet[2812]: I1104 23:56:26.385445 2812 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrhg6\" (UniqueName: \"kubernetes.io/projected/747bb446-7545-4615-a310-0f2e7073dd98-kube-api-access-jrhg6\") on node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" DevicePath \"\"" Nov 4 23:56:26.385450 kubelet[2812]: I1104 23:56:26.385465 2812 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747bb446-7545-4615-a310-0f2e7073dd98-whisker-ca-bundle\") on node \"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8\" DevicePath \"\"" Nov 4 23:56:26.626286 systemd[1]: var-lib-kubelet-pods-747bb446\x2d7545\x2d4615\x2da310\x2d0f2e7073dd98-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 23:56:26.626457 systemd[1]: var-lib-kubelet-pods-747bb446\x2d7545\x2d4615\x2da310\x2d0f2e7073dd98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrhg6.mount: Deactivated successfully. Nov 4 23:56:26.670168 systemd[1]: Removed slice kubepods-besteffort-pod747bb446_7545_4615_a310_0f2e7073dd98.slice - libcontainer container kubepods-besteffort-pod747bb446_7545_4615_a310_0f2e7073dd98.slice. 
Nov 4 23:56:26.705588 kubelet[2812]: I1104 23:56:26.705484 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cd9nd" podStartSLOduration=1.711266205 podStartE2EDuration="19.705456894s" podCreationTimestamp="2025-11-04 23:56:07 +0000 UTC" firstStartedPulling="2025-11-04 23:56:07.683158262 +0000 UTC m=+24.471342073" lastFinishedPulling="2025-11-04 23:56:25.677348957 +0000 UTC m=+42.465532762" observedRunningTime="2025-11-04 23:56:26.68914295 +0000 UTC m=+43.477326766" watchObservedRunningTime="2025-11-04 23:56:26.705456894 +0000 UTC m=+43.493640710" Nov 4 23:56:26.767942 systemd[1]: Created slice kubepods-besteffort-pod78eeee54_6dd6_435f_b279_78592fdc8b44.slice - libcontainer container kubepods-besteffort-pod78eeee54_6dd6_435f_b279_78592fdc8b44.slice. Nov 4 23:56:26.890134 kubelet[2812]: I1104 23:56:26.889979 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78eeee54-6dd6-435f-b279-78592fdc8b44-whisker-ca-bundle\") pod \"whisker-58665d7946-76phh\" (UID: \"78eeee54-6dd6-435f-b279-78592fdc8b44\") " pod="calico-system/whisker-58665d7946-76phh" Nov 4 23:56:26.890134 kubelet[2812]: I1104 23:56:26.890063 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78eeee54-6dd6-435f-b279-78592fdc8b44-whisker-backend-key-pair\") pod \"whisker-58665d7946-76phh\" (UID: \"78eeee54-6dd6-435f-b279-78592fdc8b44\") " pod="calico-system/whisker-58665d7946-76phh" Nov 4 23:56:26.890134 kubelet[2812]: I1104 23:56:26.890090 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpcb4\" (UniqueName: \"kubernetes.io/projected/78eeee54-6dd6-435f-b279-78592fdc8b44-kube-api-access-zpcb4\") pod \"whisker-58665d7946-76phh\" (UID: 
\"78eeee54-6dd6-435f-b279-78592fdc8b44\") " pod="calico-system/whisker-58665d7946-76phh" Nov 4 23:56:27.073948 containerd[1607]: time="2025-11-04T23:56:27.073891974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58665d7946-76phh,Uid:78eeee54-6dd6-435f-b279-78592fdc8b44,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:27.224472 systemd-networkd[1493]: calid12f629e9fb: Link UP Nov 4 23:56:27.224828 systemd-networkd[1493]: calid12f629e9fb: Gained carrier Nov 4 23:56:27.251302 containerd[1607]: 2025-11-04 23:56:27.111 [INFO][3884] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:27.251302 containerd[1607]: 2025-11-04 23:56:27.126 [INFO][3884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0 whisker-58665d7946- calico-system 78eeee54-6dd6-435f-b279-78592fdc8b44 886 0 2025-11-04 23:56:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58665d7946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 whisker-58665d7946-76phh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid12f629e9fb [] [] }} ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-" Nov 4 23:56:27.251302 containerd[1607]: 2025-11-04 23:56:27.126 [INFO][3884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.251302 containerd[1607]: 2025-11-04 23:56:27.163 [INFO][3897] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" HandleID="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.163 [INFO][3897] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" HandleID="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"whisker-58665d7946-76phh", "timestamp":"2025-11-04 23:56:27.163211305 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.163 [INFO][3897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.163 [INFO][3897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.164 [INFO][3897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.173 [INFO][3897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.179 [INFO][3897] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.187 [INFO][3897] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251691 containerd[1607]: 2025-11-04 23:56:27.189 [INFO][3897] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.192 [INFO][3897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.192 [INFO][3897] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.194 [INFO][3897] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077 Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.199 [INFO][3897] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.207 [INFO][3897] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.193/26] block=192.168.104.192/26 handle="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.207 [INFO][3897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.193/26] handle="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.207 [INFO][3897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:27.251970 containerd[1607]: 2025-11-04 23:56:27.207 [INFO][3897] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.193/26] IPv6=[] ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" HandleID="k8s-pod-network.4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.252178 containerd[1607]: 2025-11-04 23:56:27.212 [INFO][3884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0", 
GenerateName:"whisker-58665d7946-", Namespace:"calico-system", SelfLink:"", UID:"78eeee54-6dd6-435f-b279-78592fdc8b44", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58665d7946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"whisker-58665d7946-76phh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid12f629e9fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:27.253106 containerd[1607]: 2025-11-04 23:56:27.212 [INFO][3884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.193/32] ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.253106 containerd[1607]: 2025-11-04 23:56:27.212 [INFO][3884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid12f629e9fb ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.253106 containerd[1607]: 2025-11-04 23:56:27.225 [INFO][3884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.254376 containerd[1607]: 2025-11-04 23:56:27.227 [INFO][3884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0", GenerateName:"whisker-58665d7946-", Namespace:"calico-system", SelfLink:"", UID:"78eeee54-6dd6-435f-b279-78592fdc8b44", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58665d7946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", 
ContainerID:"4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077", Pod:"whisker-58665d7946-76phh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid12f629e9fb", MAC:"4a:76:ee:c2:7a:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:27.254531 containerd[1607]: 2025-11-04 23:56:27.243 [INFO][3884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" Namespace="calico-system" Pod="whisker-58665d7946-76phh" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-whisker--58665d7946--76phh-eth0" Nov 4 23:56:27.286225 containerd[1607]: time="2025-11-04T23:56:27.286148484Z" level=info msg="connecting to shim 4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077" address="unix:///run/containerd/s/7a38efb8b1e13c517b9c2b09ac140ea77cb1b4e13c0ae08c1885b9e76cfb9bf4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:27.322535 systemd[1]: Started cri-containerd-4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077.scope - libcontainer container 4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077. 
Nov 4 23:56:27.390585 containerd[1607]: time="2025-11-04T23:56:27.390491257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58665d7946-76phh,Uid:78eeee54-6dd6-435f-b279-78592fdc8b44,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b8c6c2246fa896954ada6627adce2aa9409d5ca3a25cd1aa237b527beb92077\"" Nov 4 23:56:27.393609 containerd[1607]: time="2025-11-04T23:56:27.393569156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:27.417971 kubelet[2812]: I1104 23:56:27.417921 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747bb446-7545-4615-a310-0f2e7073dd98" path="/var/lib/kubelet/pods/747bb446-7545-4615-a310-0f2e7073dd98/volumes" Nov 4 23:56:27.550884 containerd[1607]: time="2025-11-04T23:56:27.550822279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:27.553017 containerd[1607]: time="2025-11-04T23:56:27.552934873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:27.554371 containerd[1607]: time="2025-11-04T23:56:27.552951439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:27.554569 kubelet[2812]: E1104 23:56:27.553431 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:27.554569 kubelet[2812]: E1104 23:56:27.553680 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:27.554792 kubelet[2812]: E1104 23:56:27.554462 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fab1338c80c4f94960d2c3aff537ddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:27.558749 containerd[1607]: time="2025-11-04T23:56:27.558705569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:56:27.729176 containerd[1607]: time="2025-11-04T23:56:27.729041935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:27.731446 containerd[1607]: time="2025-11-04T23:56:27.731130101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:27.731446 containerd[1607]: time="2025-11-04T23:56:27.731162886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:27.732574 kubelet[2812]: E1104 23:56:27.732519 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:27.732696 kubelet[2812]: E1104 23:56:27.732591 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 
23:56:27.732823 kubelet[2812]: E1104 23:56:27.732748 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:27.734327 kubelet[2812]: E1104 23:56:27.734251 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:56:28.405324 systemd-networkd[1493]: vxlan.calico: Link UP Nov 4 23:56:28.405346 systemd-networkd[1493]: vxlan.calico: Gained carrier Nov 4 23:56:28.419789 containerd[1607]: time="2025-11-04T23:56:28.417394407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftl8k,Uid:95c4ef41-4f67-4f60-8777-c0dce25ae7f4,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:28.483398 systemd-networkd[1493]: calid12f629e9fb: Gained IPv6LL Nov 4 23:56:28.665493 systemd-networkd[1493]: calif36b1ed65fc: Link UP Nov 4 23:56:28.668769 systemd-networkd[1493]: calif36b1ed65fc: Gained carrier Nov 4 23:56:28.689055 kubelet[2812]: E1104 23:56:28.688978 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:56:28.701729 containerd[1607]: 2025-11-04 23:56:28.531 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0 coredns-668d6bf9bc- kube-system 95c4ef41-4f67-4f60-8777-c0dce25ae7f4 804 0 2025-11-04 23:55:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 coredns-668d6bf9bc-ftl8k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif36b1ed65fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-" Nov 4 23:56:28.701729 containerd[1607]: 2025-11-04 
23:56:28.532 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.701729 containerd[1607]: 2025-11-04 23:56:28.594 [INFO][4128] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" HandleID="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.595 [INFO][4128] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" HandleID="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"coredns-668d6bf9bc-ftl8k", "timestamp":"2025-11-04 23:56:28.59454224 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.595 [INFO][4128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.595 [INFO][4128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.595 [INFO][4128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.607 [INFO][4128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.615 [INFO][4128] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.624 [INFO][4128] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.702088 containerd[1607]: 2025-11-04 23:56:28.629 [INFO][4128] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.633 [INFO][4128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.634 [INFO][4128] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.636 [INFO][4128] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.643 [INFO][4128] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.655 [INFO][4128] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.194/26] block=192.168.104.192/26 handle="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.656 [INFO][4128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.194/26] handle="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.656 [INFO][4128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:28.703972 containerd[1607]: 2025-11-04 23:56:28.656 [INFO][4128] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.194/26] IPv6=[] ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" HandleID="k8s-pod-network.9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.659 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0", 
GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95c4ef41-4f67-4f60-8777-c0dce25ae7f4", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"coredns-668d6bf9bc-ftl8k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif36b1ed65fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.660 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.194/32] ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.660 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif36b1ed65fc ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.663 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.663 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95c4ef41-4f67-4f60-8777-c0dce25ae7f4", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff", Pod:"coredns-668d6bf9bc-ftl8k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif36b1ed65fc", MAC:"ae:5b:2d:80:34:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:28.704401 containerd[1607]: 2025-11-04 23:56:28.697 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-ftl8k" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--ftl8k-eth0" Nov 4 23:56:28.756421 containerd[1607]: time="2025-11-04T23:56:28.756313245Z" level=info msg="connecting to shim 9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff" address="unix:///run/containerd/s/308eefc3802afe83e4e4daa56db2461d11ed75de3ec07bf5d17c1a6888ac3aee" namespace=k8s.io protocol=ttrpc version=3 Nov 4 
23:56:28.808849 systemd[1]: Started cri-containerd-9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff.scope - libcontainer container 9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff. Nov 4 23:56:28.902844 containerd[1607]: time="2025-11-04T23:56:28.902255577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftl8k,Uid:95c4ef41-4f67-4f60-8777-c0dce25ae7f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff\"" Nov 4 23:56:28.907187 containerd[1607]: time="2025-11-04T23:56:28.906964889Z" level=info msg="CreateContainer within sandbox \"9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:28.927576 containerd[1607]: time="2025-11-04T23:56:28.924457218Z" level=info msg="Container 7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:28.941520 containerd[1607]: time="2025-11-04T23:56:28.941465486Z" level=info msg="CreateContainer within sandbox \"9ecedfe97332940689d2a1f44b2303017a6eec437af1076e690accabb3ab90ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c\"" Nov 4 23:56:28.942254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31004947.mount: Deactivated successfully. 
Nov 4 23:56:28.947051 containerd[1607]: time="2025-11-04T23:56:28.942528013Z" level=info msg="StartContainer for \"7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c\"" Nov 4 23:56:28.947051 containerd[1607]: time="2025-11-04T23:56:28.945548006Z" level=info msg="connecting to shim 7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c" address="unix:///run/containerd/s/308eefc3802afe83e4e4daa56db2461d11ed75de3ec07bf5d17c1a6888ac3aee" protocol=ttrpc version=3 Nov 4 23:56:28.984543 systemd[1]: Started cri-containerd-7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c.scope - libcontainer container 7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c. Nov 4 23:56:29.039386 containerd[1607]: time="2025-11-04T23:56:29.038798301Z" level=info msg="StartContainer for \"7bc28b552dad7aa18dc624a7332aaeb15f9de05ab07550c69c1992e5d98a5e1c\" returns successfully" Nov 4 23:56:29.415831 containerd[1607]: time="2025-11-04T23:56:29.415771256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w4qr,Uid:ad1b068e-ec25-488d-b894-ad5a0b2e8641,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:29.555872 systemd-networkd[1493]: calic528053108e: Link UP Nov 4 23:56:29.557406 systemd-networkd[1493]: calic528053108e: Gained carrier Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.464 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0 csi-node-driver- calico-system ad1b068e-ec25-488d-b894-ad5a0b2e8641 693 0 2025-11-04 23:56:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 csi-node-driver-4w4qr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic528053108e [] [] }} ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.464 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.502 [INFO][4274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" HandleID="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.502 [INFO][4274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" HandleID="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"csi-node-driver-4w4qr", "timestamp":"2025-11-04 23:56:29.502676109 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.502 [INFO][4274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.503 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.503 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.512 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.519 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.525 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.527 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.531 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.531 [INFO][4274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 
handle="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.533 [INFO][4274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.537 [INFO][4274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.548 [INFO][4274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.195/26] block=192.168.104.192/26 handle="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.549 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.195/26] handle="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.549 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:56:29.581304 containerd[1607]: 2025-11-04 23:56:29.549 [INFO][4274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.195/26] IPv6=[] ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" HandleID="k8s-pod-network.0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.551 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1b068e-ec25-488d-b894-ad5a0b2e8641", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", 
Pod:"csi-node-driver-4w4qr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic528053108e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.551 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.195/32] ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.551 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic528053108e ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.558 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.559 [INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" Namespace="calico-system" Pod="csi-node-driver-4w4qr" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1b068e-ec25-488d-b894-ad5a0b2e8641", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a", Pod:"csi-node-driver-4w4qr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic528053108e", MAC:"aa:a4:2e:cd:ce:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:29.583605 containerd[1607]: 2025-11-04 23:56:29.578 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" 
Namespace="calico-system" Pod="csi-node-driver-4w4qr" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-csi--node--driver--4w4qr-eth0" Nov 4 23:56:29.621583 containerd[1607]: time="2025-11-04T23:56:29.621510905Z" level=info msg="connecting to shim 0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a" address="unix:///run/containerd/s/7bc69948d1d4c9ce9a20c6f7f91c6229113bb60b1a4022f295a2b98a1a6d296d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:29.657597 systemd[1]: Started cri-containerd-0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a.scope - libcontainer container 0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a. Nov 4 23:56:29.698686 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL Nov 4 23:56:29.712950 kubelet[2812]: I1104 23:56:29.712859 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ftl8k" podStartSLOduration=40.712831634 podStartE2EDuration="40.712831634s" podCreationTimestamp="2025-11-04 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:29.710946706 +0000 UTC m=+46.499130523" watchObservedRunningTime="2025-11-04 23:56:29.712831634 +0000 UTC m=+46.501015451" Nov 4 23:56:29.728915 containerd[1607]: time="2025-11-04T23:56:29.728831691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w4qr,Uid:ad1b068e-ec25-488d-b894-ad5a0b2e8641,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d75cb8cb2fbbdcf9e0d6cbd3f38a090463d4f078a8c162b78ae7ec192fd678a\"" Nov 4 23:56:29.733645 containerd[1607]: time="2025-11-04T23:56:29.733542862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:56:29.890601 systemd-networkd[1493]: calif36b1ed65fc: Gained IPv6LL Nov 4 23:56:29.911781 containerd[1607]: time="2025-11-04T23:56:29.911706512Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Nov 4 23:56:29.913359 containerd[1607]: time="2025-11-04T23:56:29.913290425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:56:29.913547 containerd[1607]: time="2025-11-04T23:56:29.913307276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:56:29.913683 kubelet[2812]: E1104 23:56:29.913636 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:29.913788 kubelet[2812]: E1104 23:56:29.913704 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:29.913954 kubelet[2812]: E1104 23:56:29.913891 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:29.917043 containerd[1607]: time="2025-11-04T23:56:29.917005985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:56:29.978895 kubelet[2812]: I1104 23:56:29.978741 2812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:56:30.071817 containerd[1607]: time="2025-11-04T23:56:30.071470099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:30.073132 containerd[1607]: time="2025-11-04T23:56:30.073052181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:56:30.073444 containerd[1607]: time="2025-11-04T23:56:30.073064912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:56:30.073935 kubelet[2812]: E1104 23:56:30.073888 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:30.074386 kubelet[2812]: E1104 23:56:30.074345 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:30.074769 kubelet[2812]: E1104 23:56:30.074652 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:30.076320 kubelet[2812]: E1104 23:56:30.076210 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:30.081986 containerd[1607]: time="2025-11-04T23:56:30.081917319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\" id:\"e82d54d2732beb4b32bfdd91b38dad9eae2daf5654ac9b4e71a55345ca8572b7\" pid:4356 exited_at:{seconds:1762300590 nanos:81399316}" Nov 4 23:56:30.206248 containerd[1607]: time="2025-11-04T23:56:30.206175000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\" 
id:\"0cad4ce140766beff47b4f32e69b30ff3562a4c67db9af1f0e22b820ef645f1a\" pid:4380 exited_at:{seconds:1762300590 nanos:205792800}" Nov 4 23:56:30.415825 containerd[1607]: time="2025-11-04T23:56:30.415710835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zs7cl,Uid:ee9f53df-c688-4c13-8b56-bd8cb9b0e064,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:30.416251 containerd[1607]: time="2025-11-04T23:56:30.415840481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8fzw,Uid:fb4aa393-f03a-4f04-a545-20b10128cfa9,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:30.416746 containerd[1607]: time="2025-11-04T23:56:30.416541225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-tc66q,Uid:0cfb7e2a-0604-407b-ae48-6da4047f5d80,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:30.416746 containerd[1607]: time="2025-11-04T23:56:30.416646105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547c989ccf-8nsrm,Uid:196a06c5-2bf4-4f10-938e-eef198e9214f,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:30.704959 kubelet[2812]: E1104 23:56:30.704434 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:30.795631 systemd-networkd[1493]: cali86af48f92ca: Link UP Nov 4 23:56:30.799621 systemd-networkd[1493]: cali86af48f92ca: Gained carrier Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.548 [INFO][4392] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0 coredns-668d6bf9bc- kube-system ee9f53df-c688-4c13-8b56-bd8cb9b0e064 795 0 2025-11-04 23:55:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 coredns-668d6bf9bc-zs7cl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86af48f92ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.548 [INFO][4392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.663 [INFO][4443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" HandleID="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.665 [INFO][4443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" HandleID="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"coredns-668d6bf9bc-zs7cl", "timestamp":"2025-11-04 23:56:30.66358299 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.666 [INFO][4443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.666 [INFO][4443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.666 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.683 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.697 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.723 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.735 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.749 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.749 [INFO][4443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.752 [INFO][4443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9 Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.765 [INFO][4443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.775 [INFO][4443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.196/26] block=192.168.104.192/26 handle="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.775 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.196/26] handle="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.775 [INFO][4443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:30.833190 containerd[1607]: 2025-11-04 23:56:30.775 [INFO][4443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.196/26] IPv6=[] ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" HandleID="k8s-pod-network.fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.781 [INFO][4392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0", 
GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee9f53df-c688-4c13-8b56-bd8cb9b0e064", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"coredns-668d6bf9bc-zs7cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86af48f92ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.781 [INFO][4392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.196/32] ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.781 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86af48f92ca ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.798 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.799 [INFO][4392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee9f53df-c688-4c13-8b56-bd8cb9b0e064", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9", Pod:"coredns-668d6bf9bc-zs7cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86af48f92ca", MAC:"46:26:58:89:00:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:30.835192 containerd[1607]: 2025-11-04 23:56:30.822 [INFO][4392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" Namespace="kube-system" Pod="coredns-668d6bf9bc-zs7cl" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-coredns--668d6bf9bc--zs7cl-eth0" Nov 4 23:56:30.889939 containerd[1607]: time="2025-11-04T23:56:30.889860395Z" level=info msg="connecting to shim fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9" address="unix:///run/containerd/s/65b5d4705d81a2fcc9f24c3151abef326f6752522987783be5c74594350bb185" namespace=k8s.io protocol=ttrpc version=3 Nov 4 
23:56:30.910750 systemd-networkd[1493]: cali7c74a795185: Link UP Nov 4 23:56:30.918259 systemd-networkd[1493]: cali7c74a795185: Gained carrier Nov 4 23:56:30.992486 systemd[1]: Started cri-containerd-fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9.scope - libcontainer container fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9. Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.604 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0 goldmane-666569f655- calico-system fb4aa393-f03a-4f04-a545-20b10128cfa9 802 0 2025-11-04 23:56:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 goldmane-666569f655-x8fzw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c74a795185 [] [] }} ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.604 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.731 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" HandleID="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.732 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" HandleID="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"goldmane-666569f655-x8fzw", "timestamp":"2025-11-04 23:56:30.731900191 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.732 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.775 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.776 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.796 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.818 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.837 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.842 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.850 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.852 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.857 [INFO][4452] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8 Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.868 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.890 [INFO][4452] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.197/26] block=192.168.104.192/26 handle="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.891 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.197/26] handle="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.891 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:31.002067 containerd[1607]: 2025-11-04 23:56:30.891 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.197/26] IPv6=[] ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" HandleID="k8s-pod-network.334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.898 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0", 
GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fb4aa393-f03a-4f04-a545-20b10128cfa9", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"goldmane-666569f655-x8fzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c74a795185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.899 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.197/32] ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.899 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c74a795185 ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" 
WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.920 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.924 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fb4aa393-f03a-4f04-a545-20b10128cfa9", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", 
ContainerID:"334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8", Pod:"goldmane-666569f655-x8fzw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c74a795185", MAC:"3e:29:4b:c6:ca:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.004195 containerd[1607]: 2025-11-04 23:56:30.976 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" Namespace="calico-system" Pod="goldmane-666569f655-x8fzw" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-goldmane--666569f655--x8fzw-eth0" Nov 4 23:56:31.072556 containerd[1607]: time="2025-11-04T23:56:31.072492527Z" level=info msg="connecting to shim 334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8" address="unix:///run/containerd/s/034516f635a883651d796b58b6a0bd801c6c3b09b5fe256d3497e8500ce4950b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:31.096194 systemd-networkd[1493]: cali161161d0275: Link UP Nov 4 23:56:31.104409 systemd-networkd[1493]: cali161161d0275: Gained carrier Nov 4 23:56:31.106524 systemd-networkd[1493]: calic528053108e: Gained IPv6LL Nov 4 23:56:31.177654 systemd[1]: Started cri-containerd-334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8.scope - libcontainer container 334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8. 
Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.614 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0 calico-kube-controllers-547c989ccf- calico-system 196a06c5-2bf4-4f10-938e-eef198e9214f 803 0 2025-11-04 23:56:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547c989ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 calico-kube-controllers-547c989ccf-8nsrm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali161161d0275 [] [] }} ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.614 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.755 [INFO][4454] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" HandleID="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" 
Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.755 [INFO][4454] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" HandleID="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"calico-kube-controllers-547c989ccf-8nsrm", "timestamp":"2025-11-04 23:56:30.755202489 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.755 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.892 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.892 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.923 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:30.954 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.004 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.009 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.017 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.018 [INFO][4454] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.021 [INFO][4454] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.033 [INFO][4454] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.061 [INFO][4454] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.198/26] block=192.168.104.192/26 handle="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.063 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.198/26] handle="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.064 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:31.179535 containerd[1607]: 2025-11-04 23:56:31.065 [INFO][4454] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.198/26] IPv6=[] ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" HandleID="k8s-pod-network.70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.076 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0", GenerateName:"calico-kube-controllers-547c989ccf-", Namespace:"calico-system", SelfLink:"", UID:"196a06c5-2bf4-4f10-938e-eef198e9214f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547c989ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"calico-kube-controllers-547c989ccf-8nsrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali161161d0275", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.076 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.198/32] ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.077 
[INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali161161d0275 ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.109 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.123 [INFO][4418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0", GenerateName:"calico-kube-controllers-547c989ccf-", Namespace:"calico-system", SelfLink:"", UID:"196a06c5-2bf4-4f10-938e-eef198e9214f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547c989ccf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab", Pod:"calico-kube-controllers-547c989ccf-8nsrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali161161d0275", MAC:"72:45:d2:02:30:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.183004 containerd[1607]: 2025-11-04 23:56:31.169 [INFO][4418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" Namespace="calico-system" Pod="calico-kube-controllers-547c989ccf-8nsrm" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--kube--controllers--547c989ccf--8nsrm-eth0" Nov 4 23:56:31.277396 containerd[1607]: time="2025-11-04T23:56:31.277327227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zs7cl,Uid:ee9f53df-c688-4c13-8b56-bd8cb9b0e064,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9\"" Nov 4 23:56:31.277735 containerd[1607]: time="2025-11-04T23:56:31.277706045Z" level=info msg="connecting to shim 70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab" 
address="unix:///run/containerd/s/372c17d1cd48617b67aa98e6d68111557f0cd5b15c11a6609b67442b7a15a314" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:31.289978 containerd[1607]: time="2025-11-04T23:56:31.289913734Z" level=info msg="CreateContainer within sandbox \"fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:31.307436 systemd-networkd[1493]: calice0c61fa3cf: Link UP Nov 4 23:56:31.313670 systemd-networkd[1493]: calice0c61fa3cf: Gained carrier Nov 4 23:56:31.334961 containerd[1607]: time="2025-11-04T23:56:31.334901540Z" level=info msg="Container 973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:31.344747 containerd[1607]: time="2025-11-04T23:56:31.344638205Z" level=info msg="CreateContainer within sandbox \"fc7c28bc89d23ea8c5896e52cb0f12405d8e28853cf1f3211bff5d4b0b20aed9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1\"" Nov 4 23:56:31.346358 containerd[1607]: time="2025-11-04T23:56:31.346201891Z" level=info msg="StartContainer for \"973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1\"" Nov 4 23:56:31.349386 containerd[1607]: time="2025-11-04T23:56:31.349325342Z" level=info msg="connecting to shim 973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1" address="unix:///run/containerd/s/65b5d4705d81a2fcc9f24c3151abef326f6752522987783be5c74594350bb185" protocol=ttrpc version=3 Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:30.621 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0 calico-apiserver-5cd57c848d- calico-apiserver 0cfb7e2a-0604-407b-ae48-6da4047f5d80 800 0 2025-11-04 23:56:00 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd57c848d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 calico-apiserver-5cd57c848d-tc66q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calice0c61fa3cf [] [] }} ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:30.622 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:30.766 [INFO][4463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" HandleID="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:30.766 [INFO][4463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" HandleID="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"calico-apiserver-5cd57c848d-tc66q", "timestamp":"2025-11-04 23:56:30.766459216 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:30.766 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.064 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.064 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.144 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.166 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.191 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.194 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.200 [INFO][4463] 
ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.200 [INFO][4463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.206 [INFO][4463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646 Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.218 [INFO][4463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.283 [INFO][4463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.199/26] block=192.168.104.192/26 handle="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.283 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.199/26] handle="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.284 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:56:31.403028 containerd[1607]: 2025-11-04 23:56:31.284 [INFO][4463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.199/26] IPv6=[] ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" HandleID="k8s-pod-network.d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.295 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0", GenerateName:"calico-apiserver-5cd57c848d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cfb7e2a-0604-407b-ae48-6da4047f5d80", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd57c848d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", 
ContainerID:"", Pod:"calico-apiserver-5cd57c848d-tc66q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice0c61fa3cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.295 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.199/32] ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.296 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice0c61fa3cf ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.319 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.327 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" 
Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0", GenerateName:"calico-apiserver-5cd57c848d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cfb7e2a-0604-407b-ae48-6da4047f5d80", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd57c848d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646", Pod:"calico-apiserver-5cd57c848d-tc66q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice0c61fa3cf", MAC:"be:44:df:f9:3f:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.404170 containerd[1607]: 2025-11-04 23:56:31.386 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-tc66q" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--tc66q-eth0" Nov 4 23:56:31.407581 systemd[1]: Started cri-containerd-70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab.scope - libcontainer container 70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab. Nov 4 23:56:31.417182 containerd[1607]: time="2025-11-04T23:56:31.417092526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-bjqgd,Uid:15a4baa5-351d-4330-9bf0-0494048d0ffd,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:31.425533 systemd[1]: Started cri-containerd-973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1.scope - libcontainer container 973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1. Nov 4 23:56:31.509073 containerd[1607]: time="2025-11-04T23:56:31.508963255Z" level=info msg="connecting to shim d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646" address="unix:///run/containerd/s/49f205c3ec959348294fb4bd862a4a878f39e6f88f111154a0b3561776067398" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:31.606791 systemd[1]: Started cri-containerd-d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646.scope - libcontainer container d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646. 
Nov 4 23:56:31.651527 containerd[1607]: time="2025-11-04T23:56:31.651186672Z" level=info msg="StartContainer for \"973cbabe7dab50a5b7097d97137982ed340a52a6bb34e4b8fa993d102c98bfc1\" returns successfully" Nov 4 23:56:31.724668 kubelet[2812]: E1104 23:56:31.724513 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:31.918436 systemd-networkd[1493]: cali84fbe778934: Link UP Nov 4 23:56:31.922397 systemd-networkd[1493]: cali84fbe778934: Gained carrier Nov 4 23:56:31.953473 kubelet[2812]: I1104 23:56:31.952333 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zs7cl" podStartSLOduration=42.952306771 podStartE2EDuration="42.952306771s" podCreationTimestamp="2025-11-04 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:31.821662248 +0000 UTC m=+48.609846059" watchObservedRunningTime="2025-11-04 23:56:31.952306771 +0000 UTC m=+48.740490587" Nov 4 
23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.616 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0 calico-apiserver-5cd57c848d- calico-apiserver 15a4baa5-351d-4330-9bf0-0494048d0ffd 806 0 2025-11-04 23:56:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd57c848d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8 calico-apiserver-5cd57c848d-bjqgd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84fbe778934 [] [] }} ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.616 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.735 [INFO][4688] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" HandleID="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 
23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.735 [INFO][4688] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" HandleID="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", "pod":"calico-apiserver-5cd57c848d-bjqgd", "timestamp":"2025-11-04 23:56:31.735007298 +0000 UTC"}, Hostname:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.737 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.737 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.737 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8' Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.765 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.830 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.842 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.846 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.851 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.851 [INFO][4688] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.854 [INFO][4688] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051 Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.875 [INFO][4688] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 
handle="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.897 [INFO][4688] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.200/26] block=192.168.104.192/26 handle="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.899 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.200/26] handle="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" host="ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8" Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.899 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:31.955039 containerd[1607]: 2025-11-04 23:56:31.899 [INFO][4688] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.200/26] IPv6=[] ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" HandleID="k8s-pod-network.5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Workload="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.905 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0", GenerateName:"calico-apiserver-5cd57c848d-", Namespace:"calico-apiserver", SelfLink:"", UID:"15a4baa5-351d-4330-9bf0-0494048d0ffd", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd57c848d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"", Pod:"calico-apiserver-5cd57c848d-bjqgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84fbe778934", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.906 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.200/32] ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.906 [INFO][4635] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84fbe778934 ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.925 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.927 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0", GenerateName:"calico-apiserver-5cd57c848d-", Namespace:"calico-apiserver", SelfLink:"", UID:"15a4baa5-351d-4330-9bf0-0494048d0ffd", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd57c848d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-nightly-20251104-2100-3aa36c367cb32d248ff8", ContainerID:"5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051", Pod:"calico-apiserver-5cd57c848d-bjqgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84fbe778934", MAC:"3a:16:9b:db:57:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:31.958558 containerd[1607]: 2025-11-04 23:56:31.950 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" Namespace="calico-apiserver" Pod="calico-apiserver-5cd57c848d-bjqgd" WorkloadEndpoint="ci--4487--0--0--nightly--20251104--2100--3aa36c367cb32d248ff8-k8s-calico--apiserver--5cd57c848d--bjqgd-eth0" Nov 4 23:56:32.013385 containerd[1607]: time="2025-11-04T23:56:32.012358466Z" level=info msg="connecting to shim 5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051" address="unix:///run/containerd/s/f5e69b4da11819d04b213a661e7c76ff319b694beff2094e131b2ea853560479" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:32.088819 systemd[1]: Started cri-containerd-5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051.scope - libcontainer container 5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051. 
Nov 4 23:56:32.137559 containerd[1607]: time="2025-11-04T23:56:32.134913863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547c989ccf-8nsrm,Uid:196a06c5-2bf4-4f10-938e-eef198e9214f,Namespace:calico-system,Attempt:0,} returns sandbox id \"70133de2d02d5fc209d5a074e1e10e247e890674d4f131dadf41e9138961efab\"" Nov 4 23:56:32.152666 containerd[1607]: time="2025-11-04T23:56:32.151426061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:32.166523 containerd[1607]: time="2025-11-04T23:56:32.166341582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x8fzw,Uid:fb4aa393-f03a-4f04-a545-20b10128cfa9,Namespace:calico-system,Attempt:0,} returns sandbox id \"334ab9818807b1b72077436ce59c7d3c6163cf024ee386d6030349b5ca9dbce8\"" Nov 4 23:56:32.282437 containerd[1607]: time="2025-11-04T23:56:32.282384526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-bjqgd,Uid:15a4baa5-351d-4330-9bf0-0494048d0ffd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5644776804b97d2165ce6d7b914b4e46cc0939479a7ba00e3825ce146d2d9051\"" Nov 4 23:56:32.314837 containerd[1607]: time="2025-11-04T23:56:32.314688234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:32.319375 containerd[1607]: time="2025-11-04T23:56:32.319210068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:32.319375 containerd[1607]: time="2025-11-04T23:56:32.319328781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:32.320415 kubelet[2812]: E1104 
23:56:32.319695 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:32.320415 kubelet[2812]: E1104 23:56:32.319765 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:32.321313 kubelet[2812]: E1104 23:56:32.321078 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m874f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547c989ccf-8nsrm_calico-system(196a06c5-2bf4-4f10-938e-eef198e9214f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:32.322489 kubelet[2812]: E1104 23:56:32.322366 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:56:32.322790 containerd[1607]: time="2025-11-04T23:56:32.322581327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:56:32.355837 containerd[1607]: time="2025-11-04T23:56:32.355781267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd57c848d-tc66q,Uid:0cfb7e2a-0604-407b-ae48-6da4047f5d80,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d5034a8bae1290fba7e71d64a88b2ccbf821987422017fc2e8c851a24886d646\"" Nov 4 23:56:32.386493 systemd-networkd[1493]: cali7c74a795185: Gained IPv6LL Nov 4 23:56:32.489124 containerd[1607]: time="2025-11-04T23:56:32.489048708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:32.491265 containerd[1607]: time="2025-11-04T23:56:32.491194895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:56:32.491615 containerd[1607]: time="2025-11-04T23:56:32.491469834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:32.492080 kubelet[2812]: E1104 23:56:32.492033 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:32.492381 kubelet[2812]: E1104 23:56:32.492256 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:32.493096 containerd[1607]: time="2025-11-04T23:56:32.492880712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:32.494166 kubelet[2812]: E1104 23:56:32.493929 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/gol
dmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwp5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8fzw_calico-system(fb4aa393-f03a-4f04-a545-20b10128cfa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:32.495660 kubelet[2812]: E1104 23:56:32.495607 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:56:32.514563 systemd-networkd[1493]: calice0c61fa3cf: Gained IPv6LL Nov 4 23:56:32.664108 containerd[1607]: time="2025-11-04T23:56:32.663365500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:32.666336 containerd[1607]: time="2025-11-04T23:56:32.665899597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:32.666336 containerd[1607]: time="2025-11-04T23:56:32.665957616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:32.668166 kubelet[2812]: E1104 23:56:32.667542 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:32.668666 kubelet[2812]: E1104 23:56:32.668379 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:32.669178 kubelet[2812]: E1104 
23:56:32.668973 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpzl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-bjqgd_calico-apiserver(15a4baa5-351d-4330-9bf0-0494048d0ffd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:32.670223 containerd[1607]: time="2025-11-04T23:56:32.670136833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:32.670547 kubelet[2812]: E1104 23:56:32.670339 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:32.727084 kubelet[2812]: E1104 23:56:32.726927 2812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:56:32.727084 kubelet[2812]: E1104 23:56:32.727039 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:32.737265 kubelet[2812]: E1104 23:56:32.736860 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:56:32.837067 systemd-networkd[1493]: cali86af48f92ca: Gained IPv6LL Nov 4 23:56:32.841472 containerd[1607]: 
time="2025-11-04T23:56:32.841042967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:32.843454 containerd[1607]: time="2025-11-04T23:56:32.843397724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:32.843454 containerd[1607]: time="2025-11-04T23:56:32.843493555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:32.844889 kubelet[2812]: E1104 23:56:32.844127 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:32.844889 kubelet[2812]: E1104 23:56:32.844358 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:32.844889 kubelet[2812]: E1104 23:56:32.844597 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-tc66q_calico-apiserver(0cfb7e2a-0604-407b-ae48-6da4047f5d80): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:32.846891 kubelet[2812]: E1104 23:56:32.846850 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:56:33.157527 systemd-networkd[1493]: cali161161d0275: Gained IPv6LL Nov 4 23:56:33.666604 systemd-networkd[1493]: cali84fbe778934: Gained IPv6LL Nov 4 23:56:33.742521 kubelet[2812]: E1104 23:56:33.742450 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:56:33.743147 kubelet[2812]: E1104 23:56:33.743065 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:56:33.743995 kubelet[2812]: E1104 23:56:33.743152 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:56:33.745897 kubelet[2812]: E1104 23:56:33.745826 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:35.878210 ntpd[1679]: Listen normally on 6 vxlan.calico 192.168.104.192:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 6 vxlan.calico 192.168.104.192:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 7 calid12f629e9fb [fe80::ecee:eeff:feee:eeee%4]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 8 vxlan.calico 
[fe80::64bf:95ff:fe70:ffc7%5]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 9 calif36b1ed65fc [fe80::ecee:eeff:feee:eeee%8]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 10 calic528053108e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 11 cali86af48f92ca [fe80::ecee:eeff:feee:eeee%10]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 12 cali7c74a795185 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 13 cali161161d0275 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 14 calice0c61fa3cf [fe80::ecee:eeff:feee:eeee%13]:123 Nov 4 23:56:35.879363 ntpd[1679]: 4 Nov 23:56:35 ntpd[1679]: Listen normally on 15 cali84fbe778934 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 4 23:56:35.878342 ntpd[1679]: Listen normally on 7 calid12f629e9fb [fe80::ecee:eeff:feee:eeee%4]:123 Nov 4 23:56:35.878395 ntpd[1679]: Listen normally on 8 vxlan.calico [fe80::64bf:95ff:fe70:ffc7%5]:123 Nov 4 23:56:35.878435 ntpd[1679]: Listen normally on 9 calif36b1ed65fc [fe80::ecee:eeff:feee:eeee%8]:123 Nov 4 23:56:35.878475 ntpd[1679]: Listen normally on 10 calic528053108e [fe80::ecee:eeff:feee:eeee%9]:123 Nov 4 23:56:35.878512 ntpd[1679]: Listen normally on 11 cali86af48f92ca [fe80::ecee:eeff:feee:eeee%10]:123 Nov 4 23:56:35.878551 ntpd[1679]: Listen normally on 12 cali7c74a795185 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 4 23:56:35.878589 ntpd[1679]: Listen normally on 13 cali161161d0275 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 4 23:56:35.878641 ntpd[1679]: Listen normally on 14 calice0c61fa3cf [fe80::ecee:eeff:feee:eeee%13]:123 Nov 4 23:56:35.878682 ntpd[1679]: Listen normally on 15 cali84fbe778934 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 4 23:56:43.423534 containerd[1607]: time="2025-11-04T23:56:43.423475803Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:43.597655 containerd[1607]: time="2025-11-04T23:56:43.597421583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:43.599372 containerd[1607]: time="2025-11-04T23:56:43.599193693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:43.599699 containerd[1607]: time="2025-11-04T23:56:43.599248085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:43.600170 kubelet[2812]: E1104 23:56:43.600057 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:43.601294 kubelet[2812]: E1104 23:56:43.600149 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:43.601678 kubelet[2812]: E1104 23:56:43.601575 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fab1338c80c4f94960d2c3aff537ddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:43.604999 containerd[1607]: time="2025-11-04T23:56:43.604636243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:56:43.756987 containerd[1607]: time="2025-11-04T23:56:43.756929389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:43.759048 containerd[1607]: time="2025-11-04T23:56:43.758834947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:43.759048 containerd[1607]: time="2025-11-04T23:56:43.758990828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:43.759752 kubelet[2812]: E1104 23:56:43.759640 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:43.759955 kubelet[2812]: E1104 23:56:43.759725 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:43.760807 kubelet[2812]: E1104 23:56:43.760714 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:43.762263 kubelet[2812]: E1104 23:56:43.762157 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:56:44.418501 containerd[1607]: time="2025-11-04T23:56:44.417385722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:44.577096 containerd[1607]: time="2025-11-04T23:56:44.576955424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:44.579024 containerd[1607]: time="2025-11-04T23:56:44.578963276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:44.579181 containerd[1607]: time="2025-11-04T23:56:44.579099253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:44.579690 
kubelet[2812]: E1104 23:56:44.579399 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:44.579690 kubelet[2812]: E1104 23:56:44.579469 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:44.580241 containerd[1607]: time="2025-11-04T23:56:44.580170765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:56:44.581293 kubelet[2812]: E1104 23:56:44.579967 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-tc66q_calico-apiserver(0cfb7e2a-0604-407b-ae48-6da4047f5d80): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:44.582386 kubelet[2812]: E1104 23:56:44.582344 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:56:44.739364 containerd[1607]: time="2025-11-04T23:56:44.738867749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:44.741503 containerd[1607]: time="2025-11-04T23:56:44.741419186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:56:44.741799 containerd[1607]: time="2025-11-04T23:56:44.741486106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:56:44.742223 kubelet[2812]: E1104 23:56:44.742162 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:44.744503 kubelet[2812]: E1104 23:56:44.742680 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:44.744503 kubelet[2812]: E1104 23:56:44.744419 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,
StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:44.747503 containerd[1607]: time="2025-11-04T23:56:44.747338105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:56:44.919258 containerd[1607]: time="2025-11-04T23:56:44.919121973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:44.921163 containerd[1607]: time="2025-11-04T23:56:44.920987309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:56:44.921163 containerd[1607]: time="2025-11-04T23:56:44.921113769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:56:44.921498 kubelet[2812]: E1104 23:56:44.921439 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:44.923146 kubelet[2812]: 
E1104 23:56:44.921516 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:44.923146 kubelet[2812]: E1104 23:56:44.921695 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyR
ootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:44.923503 kubelet[2812]: E1104 23:56:44.923420 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:46.417767 containerd[1607]: time="2025-11-04T23:56:46.417713685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:56:46.582306 containerd[1607]: time="2025-11-04T23:56:46.581205724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:46.583108 
containerd[1607]: time="2025-11-04T23:56:46.583046123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:56:46.583242 containerd[1607]: time="2025-11-04T23:56:46.583167566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:46.583535 kubelet[2812]: E1104 23:56:46.583476 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:46.584762 kubelet[2812]: E1104 23:56:46.584123 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:46.585587 kubelet[2812]: E1104 23:56:46.585481 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwp5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8fzw_calico-system(fb4aa393-f03a-4f04-a545-20b10128cfa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:46.587117 kubelet[2812]: E1104 23:56:46.587059 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:56:47.416470 containerd[1607]: time="2025-11-04T23:56:47.416368753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:47.585719 containerd[1607]: time="2025-11-04T23:56:47.585465140Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 4 23:56:47.587332 containerd[1607]: time="2025-11-04T23:56:47.587164928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:47.587332 containerd[1607]: time="2025-11-04T23:56:47.587199399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:47.587830 kubelet[2812]: E1104 23:56:47.587736 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:47.587830 kubelet[2812]: E1104 23:56:47.587802 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:47.589048 kubelet[2812]: E1104 23:56:47.588546 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpzl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-bjqgd_calico-apiserver(15a4baa5-351d-4330-9bf0-0494048d0ffd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:47.590565 kubelet[2812]: E1104 23:56:47.590507 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:48.416878 containerd[1607]: time="2025-11-04T23:56:48.416828673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:48.612555 containerd[1607]: time="2025-11-04T23:56:48.612252488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:48.614215 containerd[1607]: time="2025-11-04T23:56:48.614059835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:48.614215 containerd[1607]: time="2025-11-04T23:56:48.614094996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:48.614516 kubelet[2812]: E1104 23:56:48.614445 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:48.615862 kubelet[2812]: E1104 23:56:48.614531 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:48.615862 kubelet[2812]: E1104 23:56:48.614722 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m874f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547c989ccf-8nsrm_calico-system(196a06c5-2bf4-4f10-938e-eef198e9214f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:48.616508 kubelet[2812]: E1104 23:56:48.616454 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:56:55.339341 systemd[1]: Started sshd@8-10.128.0.112:22-139.178.68.195:51138.service - OpenSSH per-connection server daemon (139.178.68.195:51138). Nov 4 23:56:55.424575 kubelet[2812]: E1104 23:56:55.424509 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:56:55.685018 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 51138 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:56:55.688155 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:55.700389 systemd-logind[1581]: New session 8 of user core. Nov 4 23:56:55.708797 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 4 23:56:56.063595 sshd[4848]: Connection closed by 139.178.68.195 port 51138 Nov 4 23:56:56.065427 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:56.075581 systemd-logind[1581]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:56:56.076997 systemd[1]: sshd@8-10.128.0.112:22-139.178.68.195:51138.service: Deactivated successfully. Nov 4 23:56:56.083140 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:56:56.089734 systemd-logind[1581]: Removed session 8. Nov 4 23:56:56.416048 kubelet[2812]: E1104 23:56:56.415892 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:56:56.420364 kubelet[2812]: E1104 23:56:56.420300 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:56:59.421812 kubelet[2812]: E1104 23:56:59.421752 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:56:59.423363 kubelet[2812]: E1104 23:56:59.423311 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:57:00.267601 containerd[1607]: time="2025-11-04T23:57:00.267546798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\" id:\"7ba47cbaca6c651a1ccb2676a45430bb40c488730daf6f3c73f21dd4e82778fc\" pid:4873 exited_at:{seconds:1762300620 nanos:266298643}" Nov 4 23:57:00.416729 kubelet[2812]: E1104 23:57:00.416529 2812 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:57:01.124443 systemd[1]: Started sshd@9-10.128.0.112:22-139.178.68.195:51150.service - OpenSSH per-connection server daemon (139.178.68.195:51150). Nov 4 23:57:01.461734 sshd[4886]: Accepted publickey for core from 139.178.68.195 port 51150 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:01.465004 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:01.486599 systemd-logind[1581]: New session 9 of user core. Nov 4 23:57:01.489236 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:57:01.815489 sshd[4889]: Connection closed by 139.178.68.195 port 51150 Nov 4 23:57:01.816642 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:01.826722 systemd[1]: sshd@9-10.128.0.112:22-139.178.68.195:51150.service: Deactivated successfully. Nov 4 23:57:01.832442 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:57:01.835521 systemd-logind[1581]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:57:01.839499 systemd-logind[1581]: Removed session 9. Nov 4 23:57:06.875619 systemd[1]: Started sshd@10-10.128.0.112:22-139.178.68.195:38014.service - OpenSSH per-connection server daemon (139.178.68.195:38014). 
Nov 4 23:57:07.229759 sshd[4903]: Accepted publickey for core from 139.178.68.195 port 38014 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:07.232884 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:07.245937 systemd-logind[1581]: New session 10 of user core. Nov 4 23:57:07.251860 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:57:07.420948 containerd[1607]: time="2025-11-04T23:57:07.420873746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:07.589217 containerd[1607]: time="2025-11-04T23:57:07.589157742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:07.592989 containerd[1607]: time="2025-11-04T23:57:07.592908067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:07.593165 containerd[1607]: time="2025-11-04T23:57:07.593039185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:07.594513 kubelet[2812]: E1104 23:57:07.594448 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:07.596505 kubelet[2812]: E1104 23:57:07.594528 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:07.596505 kubelet[2812]: E1104 23:57:07.594714 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-tc66q_calico-apiserver(0cfb7e2a-0604-407b-ae48-6da4047f5d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:07.596505 kubelet[2812]: E1104 23:57:07.596437 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:57:07.623407 sshd[4906]: Connection closed by 139.178.68.195 port 38014 Nov 4 23:57:07.624560 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:07.638590 systemd[1]: 
sshd@10-10.128.0.112:22-139.178.68.195:38014.service: Deactivated successfully. Nov 4 23:57:07.645319 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:57:07.648436 systemd-logind[1581]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:57:07.652903 systemd-logind[1581]: Removed session 10. Nov 4 23:57:07.688314 systemd[1]: Started sshd@11-10.128.0.112:22-139.178.68.195:38030.service - OpenSSH per-connection server daemon (139.178.68.195:38030). Nov 4 23:57:08.034322 sshd[4919]: Accepted publickey for core from 139.178.68.195 port 38030 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:08.036228 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:08.044066 systemd-logind[1581]: New session 11 of user core. Nov 4 23:57:08.052553 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:57:08.460051 sshd[4922]: Connection closed by 139.178.68.195 port 38030 Nov 4 23:57:08.460968 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:08.477613 systemd-logind[1581]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:57:08.479792 systemd[1]: sshd@11-10.128.0.112:22-139.178.68.195:38030.service: Deactivated successfully. Nov 4 23:57:08.484673 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:57:08.487936 systemd-logind[1581]: Removed session 11. Nov 4 23:57:08.523721 systemd[1]: Started sshd@12-10.128.0.112:22-139.178.68.195:38034.service - OpenSSH per-connection server daemon (139.178.68.195:38034). Nov 4 23:57:08.855308 sshd[4932]: Accepted publickey for core from 139.178.68.195 port 38034 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:08.859132 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:08.871403 systemd-logind[1581]: New session 12 of user core. 
Nov 4 23:57:08.876700 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:57:09.214344 sshd[4935]: Connection closed by 139.178.68.195 port 38034 Nov 4 23:57:09.216127 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:09.222971 systemd-logind[1581]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:57:09.224694 systemd[1]: sshd@12-10.128.0.112:22-139.178.68.195:38034.service: Deactivated successfully. Nov 4 23:57:09.230575 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:57:09.237113 systemd-logind[1581]: Removed session 12. Nov 4 23:57:10.417198 containerd[1607]: time="2025-11-04T23:57:10.417150375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:57:10.582641 containerd[1607]: time="2025-11-04T23:57:10.582569416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:10.584699 containerd[1607]: time="2025-11-04T23:57:10.584566459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:57:10.584699 containerd[1607]: time="2025-11-04T23:57:10.584693571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:57:10.584957 kubelet[2812]: E1104 23:57:10.584905 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:10.587705 kubelet[2812]: E1104 23:57:10.584973 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:10.587705 kubelet[2812]: E1104 23:57:10.585408 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,
TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:10.587975 containerd[1607]: time="2025-11-04T23:57:10.586218370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:57:10.752720 containerd[1607]: time="2025-11-04T23:57:10.752577575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:10.755219 containerd[1607]: time="2025-11-04T23:57:10.755124916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:57:10.755398 containerd[1607]: time="2025-11-04T23:57:10.755160464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:57:10.755862 kubelet[2812]: E1104 23:57:10.755689 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:10.755973 kubelet[2812]: E1104 23:57:10.755911 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:10.756367 kubelet[2812]: E1104 23:57:10.756301 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fab1338c80c4f94960d2c3aff537ddd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:10.757448 containerd[1607]: time="2025-11-04T23:57:10.756956825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:57:10.916115 containerd[1607]: time="2025-11-04T23:57:10.916039730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:10.917844 containerd[1607]: time="2025-11-04T23:57:10.917763926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:57:10.918103 containerd[1607]: time="2025-11-04T23:57:10.917809810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:57:10.918450 kubelet[2812]: E1104 23:57:10.918317 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:10.918450 kubelet[2812]: E1104 23:57:10.918405 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:10.919373 containerd[1607]: time="2025-11-04T23:57:10.919313623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:57:10.919825 kubelet[2812]: E1104 23:57:10.919614 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flnwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},
Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4w4qr_calico-system(ad1b068e-ec25-488d-b894-ad5a0b2e8641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:10.921516 kubelet[2812]: E1104 23:57:10.921449 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:57:11.080392 containerd[1607]: time="2025-11-04T23:57:11.079751675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:11.083517 containerd[1607]: time="2025-11-04T23:57:11.083351198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:57:11.083517 containerd[1607]: time="2025-11-04T23:57:11.083475189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:57:11.083984 kubelet[2812]: E1104 23:57:11.083917 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:11.084248 kubelet[2812]: E1104 23:57:11.084198 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:11.085488 kubelet[2812]: E1104 23:57:11.085395 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpcb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58665d7946-76phh_calico-system(78eeee54-6dd6-435f-b279-78592fdc8b44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:11.086940 kubelet[2812]: E1104 23:57:11.086870 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:57:12.419302 containerd[1607]: time="2025-11-04T23:57:12.418381585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:12.577315 containerd[1607]: time="2025-11-04T23:57:12.577191811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:12.579351 containerd[1607]: time="2025-11-04T23:57:12.579247942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:12.579483 containerd[1607]: time="2025-11-04T23:57:12.579314575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:12.580258 
kubelet[2812]: E1104 23:57:12.580063 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:12.580803 kubelet[2812]: E1104 23:57:12.580304 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:12.580864 kubelet[2812]: E1104 23:57:12.580800 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpzl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd57c848d-bjqgd_calico-apiserver(15a4baa5-351d-4330-9bf0-0494048d0ffd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:12.582093 kubelet[2812]: E1104 23:57:12.581936 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:57:12.582380 containerd[1607]: time="2025-11-04T23:57:12.582341584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:57:12.749112 containerd[1607]: time="2025-11-04T23:57:12.748242565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:12.751153 containerd[1607]: time="2025-11-04T23:57:12.750994896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:57:12.751153 containerd[1607]: time="2025-11-04T23:57:12.751115109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:12.751695 kubelet[2812]: E1104 23:57:12.751633 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:12.752004 kubelet[2812]: E1104 23:57:12.751882 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:12.753131 kubelet[2812]: E1104 23:57:12.753032 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwp5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x8fzw_calico-system(fb4aa393-f03a-4f04-a545-20b10128cfa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:12.754572 kubelet[2812]: E1104 23:57:12.754510 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9" Nov 4 23:57:14.024833 systemd[1]: Started sshd@13-10.128.0.112:22-103.181.143.104:37970.service - OpenSSH per-connection server daemon (103.181.143.104:37970). Nov 4 23:57:14.272750 systemd[1]: Started sshd@14-10.128.0.112:22-139.178.68.195:60760.service - OpenSSH per-connection server daemon (139.178.68.195:60760). Nov 4 23:57:14.418034 containerd[1607]: time="2025-11-04T23:57:14.417980112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:57:14.589569 containerd[1607]: time="2025-11-04T23:57:14.589427205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:14.591286 containerd[1607]: time="2025-11-04T23:57:14.591220281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:57:14.592430 containerd[1607]: time="2025-11-04T23:57:14.592390780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:57:14.593696 kubelet[2812]: E1104 23:57:14.592952 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:57:14.593696 kubelet[2812]: E1104 23:57:14.593022 2812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:57:14.593696 kubelet[2812]: E1104 23:57:14.593201 2812 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m874f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547c989ccf-8nsrm_calico-system(196a06c5-2bf4-4f10-938e-eef198e9214f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:14.596724 kubelet[2812]: E1104 23:57:14.596647 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f" Nov 4 23:57:14.604412 sshd[4963]: Accepted publickey for core from 139.178.68.195 port 60760 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:14.607103 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:14.622634 systemd-logind[1581]: New session 13 of user core. Nov 4 23:57:14.625576 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:57:14.983326 sshd[4966]: Connection closed by 139.178.68.195 port 60760 Nov 4 23:57:14.984191 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:14.993201 systemd[1]: sshd@14-10.128.0.112:22-139.178.68.195:60760.service: Deactivated successfully. Nov 4 23:57:14.999817 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:57:15.003389 systemd-logind[1581]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:57:15.006983 systemd-logind[1581]: Removed session 13. Nov 4 23:57:16.129670 sshd[4955]: Received disconnect from 103.181.143.104 port 37970:11: Bye Bye [preauth] Nov 4 23:57:16.129670 sshd[4955]: Disconnected from authenticating user root 103.181.143.104 port 37970 [preauth] Nov 4 23:57:16.134689 systemd[1]: sshd@13-10.128.0.112:22-103.181.143.104:37970.service: Deactivated successfully. 
Nov 4 23:57:18.426325 kubelet[2812]: E1104 23:57:18.424342 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80" Nov 4 23:57:20.047191 systemd[1]: Started sshd@15-10.128.0.112:22-139.178.68.195:60768.service - OpenSSH per-connection server daemon (139.178.68.195:60768). Nov 4 23:57:20.370148 sshd[4980]: Accepted publickey for core from 139.178.68.195 port 60768 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:20.373833 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:20.386378 systemd-logind[1581]: New session 14 of user core. Nov 4 23:57:20.393772 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:57:20.724545 sshd[4984]: Connection closed by 139.178.68.195 port 60768 Nov 4 23:57:20.725592 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:20.738929 systemd-logind[1581]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:57:20.740050 systemd[1]: sshd@15-10.128.0.112:22-139.178.68.195:60768.service: Deactivated successfully. Nov 4 23:57:20.748497 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:57:20.754155 systemd-logind[1581]: Removed session 14. 
Nov 4 23:57:23.419765 kubelet[2812]: E1104 23:57:23.418841 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd" Nov 4 23:57:23.423315 kubelet[2812]: E1104 23:57:23.422460 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641" Nov 4 23:57:25.789073 systemd[1]: Started sshd@16-10.128.0.112:22-139.178.68.195:41050.service - OpenSSH per-connection server daemon (139.178.68.195:41050). 
Nov 4 23:57:26.135325 sshd[4998]: Accepted publickey for core from 139.178.68.195 port 41050 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs Nov 4 23:57:26.137705 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:26.146625 systemd-logind[1581]: New session 15 of user core. Nov 4 23:57:26.152524 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:57:26.417700 kubelet[2812]: E1104 23:57:26.416460 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44" Nov 4 23:57:26.566350 sshd[5001]: Connection closed by 139.178.68.195 port 41050 Nov 4 23:57:26.567207 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:26.577240 systemd-logind[1581]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:57:26.577819 systemd[1]: sshd@16-10.128.0.112:22-139.178.68.195:41050.service: Deactivated successfully. Nov 4 23:57:26.585658 systemd[1]: session-15.scope: Deactivated successfully. 
Nov 4 23:57:26.593092 systemd-logind[1581]: Removed session 15.
Nov 4 23:57:26.624918 systemd[1]: Started sshd@17-10.128.0.112:22-139.178.68.195:41062.service - OpenSSH per-connection server daemon (139.178.68.195:41062).
Nov 4 23:57:26.958418 sshd[5013]: Accepted publickey for core from 139.178.68.195 port 41062 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:26.961149 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:26.971002 systemd-logind[1581]: New session 16 of user core.
Nov 4 23:57:26.978538 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 4 23:57:27.396302 sshd[5016]: Connection closed by 139.178.68.195 port 41062
Nov 4 23:57:27.397240 sshd-session[5013]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:27.409450 systemd[1]: sshd@17-10.128.0.112:22-139.178.68.195:41062.service: Deactivated successfully.
Nov 4 23:57:27.416557 systemd[1]: session-16.scope: Deactivated successfully.
Nov 4 23:57:27.426111 systemd-logind[1581]: Session 16 logged out. Waiting for processes to exit.
Nov 4 23:57:27.432627 systemd-logind[1581]: Removed session 16.
Nov 4 23:57:27.440321 kubelet[2812]: E1104 23:57:27.439938 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f"
Nov 4 23:57:27.443654 kubelet[2812]: E1104 23:57:27.443602 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9"
Nov 4 23:57:27.458492 systemd[1]: Started sshd@18-10.128.0.112:22-139.178.68.195:41078.service - OpenSSH per-connection server daemon (139.178.68.195:41078).
Nov 4 23:57:27.808417 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 41078 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:27.810838 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:27.819374 systemd-logind[1581]: New session 17 of user core.
Nov 4 23:57:27.827559 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 4 23:57:29.083887 sshd[5029]: Connection closed by 139.178.68.195 port 41078
Nov 4 23:57:29.084403 sshd-session[5026]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:29.094604 systemd-logind[1581]: Session 17 logged out. Waiting for processes to exit.
Nov 4 23:57:29.096918 systemd[1]: sshd@18-10.128.0.112:22-139.178.68.195:41078.service: Deactivated successfully.
Nov 4 23:57:29.103150 systemd[1]: session-17.scope: Deactivated successfully.
Nov 4 23:57:29.108836 systemd-logind[1581]: Removed session 17.
Nov 4 23:57:29.149741 systemd[1]: Started sshd@19-10.128.0.112:22-139.178.68.195:41088.service - OpenSSH per-connection server daemon (139.178.68.195:41088).
Nov 4 23:57:29.418478 kubelet[2812]: E1104 23:57:29.417827 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80"
Nov 4 23:57:29.490685 sshd[5046]: Accepted publickey for core from 139.178.68.195 port 41088 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:29.493116 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:29.503829 systemd-logind[1581]: New session 18 of user core.
Nov 4 23:57:29.509728 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 4 23:57:30.137177 sshd[5049]: Connection closed by 139.178.68.195 port 41088
Nov 4 23:57:30.138358 sshd-session[5046]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:30.149130 systemd[1]: sshd@19-10.128.0.112:22-139.178.68.195:41088.service: Deactivated successfully.
Nov 4 23:57:30.149632 systemd-logind[1581]: Session 18 logged out. Waiting for processes to exit.
Nov 4 23:57:30.156205 systemd[1]: session-18.scope: Deactivated successfully.
Nov 4 23:57:30.163149 systemd-logind[1581]: Removed session 18.
Nov 4 23:57:30.193748 systemd[1]: Started sshd@20-10.128.0.112:22-139.178.68.195:41102.service - OpenSSH per-connection server daemon (139.178.68.195:41102).
Nov 4 23:57:30.343139 containerd[1607]: time="2025-11-04T23:57:30.343072236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"542845ba1ad3a9d567af2c469d1d48243054ee6aa2467a41e6e02b98495eb063\" id:\"5d1d6617dac305d9ac64fa161abc3bfe38ea11960f8528b77f1b6084c013e7fc\" pid:5067 exited_at:{seconds:1762300650 nanos:342363009}"
Nov 4 23:57:30.527133 sshd[5082]: Accepted publickey for core from 139.178.68.195 port 41102 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:30.530832 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:30.542846 systemd-logind[1581]: New session 19 of user core.
Nov 4 23:57:30.548538 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 23:57:30.861335 sshd[5088]: Connection closed by 139.178.68.195 port 41102
Nov 4 23:57:30.863483 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:30.874055 systemd[1]: sshd@20-10.128.0.112:22-139.178.68.195:41102.service: Deactivated successfully.
Nov 4 23:57:30.874745 systemd-logind[1581]: Session 19 logged out. Waiting for processes to exit.
Nov 4 23:57:30.880165 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 23:57:30.886842 systemd-logind[1581]: Removed session 19.
Nov 4 23:57:35.927534 systemd[1]: Started sshd@21-10.128.0.112:22-139.178.68.195:47072.service - OpenSSH per-connection server daemon (139.178.68.195:47072).
Nov 4 23:57:36.252794 sshd[5101]: Accepted publickey for core from 139.178.68.195 port 47072 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:36.254548 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:36.262194 systemd-logind[1581]: New session 20 of user core.
Nov 4 23:57:36.269837 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 23:57:36.417443 kubelet[2812]: E1104 23:57:36.416685 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-bjqgd" podUID="15a4baa5-351d-4330-9bf0-0494048d0ffd"
Nov 4 23:57:36.422069 kubelet[2812]: E1104 23:57:36.420456 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641"
Nov 4 23:57:36.607591 sshd[5106]: Connection closed by 139.178.68.195 port 47072
Nov 4 23:57:36.608625 sshd-session[5101]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:36.624118 systemd[1]: sshd@21-10.128.0.112:22-139.178.68.195:47072.service: Deactivated successfully.
Nov 4 23:57:36.628616 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 23:57:36.631729 systemd-logind[1581]: Session 20 logged out. Waiting for processes to exit.
Nov 4 23:57:36.636431 systemd-logind[1581]: Removed session 20.
Nov 4 23:57:39.224438 systemd[1]: Started sshd@22-10.128.0.112:22-199.195.254.215:57850.service - OpenSSH per-connection server daemon (199.195.254.215:57850).
Nov 4 23:57:39.419559 kubelet[2812]: E1104 23:57:39.419497 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f"
Nov 4 23:57:39.423216 kubelet[2812]: E1104 23:57:39.423143 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58665d7946-76phh" podUID="78eeee54-6dd6-435f-b279-78592fdc8b44"
Nov 4 23:57:41.667990 systemd[1]: Started sshd@23-10.128.0.112:22-139.178.68.195:47076.service - OpenSSH per-connection server daemon (139.178.68.195:47076).
Nov 4 23:57:42.001549 sshd[5124]: Accepted publickey for core from 139.178.68.195 port 47076 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:42.003736 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:42.015712 systemd-logind[1581]: New session 21 of user core.
Nov 4 23:57:42.023793 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 23:57:42.351153 sshd[5127]: Connection closed by 139.178.68.195 port 47076
Nov 4 23:57:42.351582 sshd-session[5124]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:42.361239 systemd[1]: sshd@23-10.128.0.112:22-139.178.68.195:47076.service: Deactivated successfully.
Nov 4 23:57:42.366169 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 23:57:42.370439 systemd-logind[1581]: Session 21 logged out. Waiting for processes to exit.
Nov 4 23:57:42.374214 systemd-logind[1581]: Removed session 21.
Nov 4 23:57:42.416999 kubelet[2812]: E1104 23:57:42.416942 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x8fzw" podUID="fb4aa393-f03a-4f04-a545-20b10128cfa9"
Nov 4 23:57:43.347376 sshd[5119]: Received disconnect from 199.195.254.215 port 57850:11: Bye Bye [preauth]
Nov 4 23:57:43.347376 sshd[5119]: Disconnected from authenticating user root 199.195.254.215 port 57850 [preauth]
Nov 4 23:57:43.352956 systemd[1]: sshd@22-10.128.0.112:22-199.195.254.215:57850.service: Deactivated successfully.
Nov 4 23:57:44.415936 kubelet[2812]: E1104 23:57:44.415874 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd57c848d-tc66q" podUID="0cfb7e2a-0604-407b-ae48-6da4047f5d80"
Nov 4 23:57:47.416329 systemd[1]: Started sshd@24-10.128.0.112:22-139.178.68.195:50970.service - OpenSSH per-connection server daemon (139.178.68.195:50970).
Nov 4 23:57:47.778397 sshd[5143]: Accepted publickey for core from 139.178.68.195 port 50970 ssh2: RSA SHA256:BdS1FYOciP7gXJhQG04j4TXMl7SktPWimy49vErOTWs
Nov 4 23:57:47.781325 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:57:47.794582 systemd-logind[1581]: New session 22 of user core.
Nov 4 23:57:47.799545 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 4 23:57:48.121586 sshd[5146]: Connection closed by 139.178.68.195 port 50970
Nov 4 23:57:48.122428 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Nov 4 23:57:48.134384 systemd-logind[1581]: Session 22 logged out. Waiting for processes to exit.
Nov 4 23:57:48.135118 systemd[1]: sshd@24-10.128.0.112:22-139.178.68.195:50970.service: Deactivated successfully.
Nov 4 23:57:48.141716 systemd[1]: session-22.scope: Deactivated successfully.
Nov 4 23:57:48.146972 systemd-logind[1581]: Removed session 22.
Nov 4 23:57:50.416359 kubelet[2812]: E1104 23:57:50.415880 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547c989ccf-8nsrm" podUID="196a06c5-2bf4-4f10-938e-eef198e9214f"
Nov 4 23:57:50.418702 kubelet[2812]: E1104 23:57:50.418647 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4w4qr" podUID="ad1b068e-ec25-488d-b894-ad5a0b2e8641"