Nov 6 00:21:46.105348 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:21:46.105391 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:21:46.105414 kernel: BIOS-provided physical RAM map: Nov 6 00:21:46.105428 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Nov 6 00:21:46.105442 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Nov 6 00:21:46.105456 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Nov 6 00:21:46.105474 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Nov 6 00:21:46.105489 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Nov 6 00:21:46.105507 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable Nov 6 00:21:46.105521 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data Nov 6 00:21:46.105536 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable Nov 6 00:21:46.105551 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Nov 6 00:21:46.105565 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Nov 6 00:21:46.105581 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Nov 6 00:21:46.105603 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Nov 6 00:21:46.105620 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Nov 6 00:21:46.105636 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Nov 6 00:21:46.105653 kernel: NX (Execute Disable) protection: active Nov 6 00:21:46.105669 kernel: APIC: Static calls initialized Nov 6 00:21:46.105686 kernel: efi: EFI v2.7 by EDK II Nov 6 00:21:46.105703 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018 Nov 6 00:21:46.105719 kernel: random: crng init done Nov 6 00:21:46.105736 kernel: secureboot: Secure boot disabled Nov 6 00:21:46.105751 kernel: SMBIOS 2.4 present. 
Nov 6 00:21:46.105766 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Nov 6 00:21:46.105785 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:21:46.105801 kernel: Hypervisor detected: KVM Nov 6 00:21:46.105816 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 6 00:21:46.105829 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:21:46.105843 kernel: kvm-clock: using sched offset of 15481416466 cycles Nov 6 00:21:46.105858 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:21:46.105873 kernel: tsc: Detected 2299.998 MHz processor Nov 6 00:21:46.105887 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:21:46.105902 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:21:46.105918 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Nov 6 00:21:46.105938 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Nov 6 00:21:46.106001 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:21:46.106017 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 6 00:21:46.106031 kernel: Using GB pages for direct mapping Nov 6 00:21:46.106047 kernel: ACPI: Early table checksum verification disabled Nov 6 00:21:46.106070 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Nov 6 00:21:46.106101 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Nov 6 00:21:46.106123 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Nov 6 00:21:46.106140 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Nov 6 00:21:46.106156 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 6 00:21:46.106173 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Nov 6 00:21:46.106190 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 6 00:21:46.106207 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 6 00:21:46.106223 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 6 00:21:46.106243 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 6 00:21:46.106260 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 6 00:21:46.106276 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 6 00:21:46.106292 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 6 00:21:46.106309 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 6 00:21:46.106325 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 6 00:21:46.106341 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 6 00:21:46.106358 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 6 00:21:46.106374 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 6 00:21:46.106395 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 6 00:21:46.106412 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 6 00:21:46.106428 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 6 00:21:46.106445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Nov 6 00:21:46.106462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] Nov 6 00:21:46.106479 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Nov 6 00:21:46.106498 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Nov 6 00:21:46.106514 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Nov 6 00:21:46.106530 kernel: Zone ranges: Nov 6 00:21:46.106552 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:21:46.106568 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 6 00:21:46.106585 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 6 00:21:46.106602 kernel: Device empty Nov 6 00:21:46.106619 kernel: Movable zone start for each node Nov 6 00:21:46.106636 kernel: Early memory node ranges Nov 6 00:21:46.106652 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 6 00:21:46.106669 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 6 00:21:46.106687 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff] Nov 6 00:21:46.106707 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff] Nov 6 00:21:46.106723 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 6 00:21:46.106740 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 6 00:21:46.106757 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 6 00:21:46.106774 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:21:46.106790 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 6 00:21:46.106807 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 6 00:21:46.106823 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Nov 6 00:21:46.106840 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 6 00:21:46.106861 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 6 00:21:46.106878 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 6 00:21:46.106894 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:21:46.106911 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:21:46.106927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:21:46.106944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:21:46.106970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:21:46.106987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:21:46.107004 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:21:46.107026 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:21:46.107042 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:21:46.107058 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:21:46.107075 kernel: CPU topo: Max. threads per core: 2 Nov 6 00:21:46.107107 kernel: CPU topo: Num. cores per package: 1 Nov 6 00:21:46.107123 kernel: CPU topo: Num. 
threads per package: 2 Nov 6 00:21:46.107140 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:21:46.107156 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 6 00:21:46.107183 kernel: Booting paravirtualized kernel on KVM Nov 6 00:21:46.107205 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:21:46.107222 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:21:46.107239 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:21:46.107256 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:21:46.107273 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:21:46.107289 kernel: kvm-guest: PV spinlocks enabled Nov 6 00:21:46.107306 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:21:46.107323 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:21:46.107340 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 6 00:21:46.107361 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 00:21:46.107378 kernel: Fallback order for Node 0: 0 Nov 6 00:21:46.107395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Nov 6 00:21:46.107413 kernel: Policy zone: Normal Nov 6 00:21:46.107430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:21:46.107448 kernel: software IO TLB: area num 2. Nov 6 00:21:46.107480 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:21:46.107498 kernel: Kernel/User page tables isolation: enabled Nov 6 00:21:46.107516 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:21:46.107534 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:21:46.107550 kernel: Dynamic Preempt: voluntary Nov 6 00:21:46.107570 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:21:46.107590 kernel: rcu: RCU event tracing is enabled. Nov 6 00:21:46.107610 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:21:46.107630 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:21:46.107649 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:21:46.107671 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:21:46.107690 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:21:46.107709 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:21:46.107728 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:21:46.107747 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:21:46.107767 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:21:46.107787 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 00:21:46.107806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 00:21:46.107824 kernel: Console: colour dummy device 80x25 Nov 6 00:21:46.107847 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:21:46.107866 kernel: ACPI: Core revision 20240827 Nov 6 00:21:46.107885 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:21:46.107903 kernel: x2apic enabled Nov 6 00:21:46.107922 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:21:46.107965 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 6 00:21:46.107986 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 6 00:21:46.108007 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Nov 6 00:21:46.108027 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 6 00:21:46.108049 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 6 00:21:46.108067 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:21:46.108101 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Nov 6 00:21:46.108119 kernel: Spectre V2 : Mitigation: IBRS Nov 6 00:21:46.108138 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:21:46.108154 kernel: RETBleed: Mitigation: IBRS Nov 6 00:21:46.108172 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 00:21:46.108190 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 6 00:21:46.108207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 00:21:46.108230 kernel: MDS: Mitigation: Clear CPU buffers Nov 6 00:21:46.108248 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 00:21:46.108266 kernel: active return thunk: its_return_thunk Nov 6 00:21:46.108284 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 00:21:46.108303 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:21:46.108322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:21:46.108340 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:21:46.108358 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:21:46.108377 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 6 00:21:46.108400 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:21:46.108419 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:21:46.108437 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:21:46.108456 kernel: landlock: Up and running. Nov 6 00:21:46.108474 kernel: SELinux: Initializing. Nov 6 00:21:46.108493 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:21:46.108512 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:21:46.108531 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 6 00:21:46.108549 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 6 00:21:46.108572 kernel: signal: max sigframe size: 1776 Nov 6 00:21:46.108590 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:21:46.108610 kernel: rcu: Max phase no-delay instances is 400. 
Nov 6 00:21:46.108628 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:21:46.108647 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 00:21:46.108665 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:21:46.108684 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:21:46.108703 kernel: .... node #0, CPUs: #1 Nov 6 00:21:46.108722 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 6 00:21:46.108745 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 6 00:21:46.108764 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:21:46.108782 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 6 00:21:46.108801 kernel: Memory: 7558108K/7860544K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 296860K reserved, 0K cma-reserved) Nov 6 00:21:46.108820 kernel: devtmpfs: initialized Nov 6 00:21:46.108839 kernel: x86/mm: Memory block size: 128MB Nov 6 00:21:46.108857 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 6 00:21:46.108876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:21:46.108897 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:21:46.108916 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:21:46.108934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:21:46.108959 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:21:46.108978 kernel: audit: type=2000 audit(1762388501.705:1): state=initialized audit_enabled=0 res=1 Nov 6 00:21:46.108997 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:21:46.109015 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:21:46.109034 kernel: cpuidle: using governor menu Nov 6 00:21:46.109052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:21:46.109074 kernel: dca service started, version 1.12.1 Nov 6 00:21:46.109106 kernel: PCI: Using configuration type 1 for base access Nov 6 00:21:46.109133 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:21:46.109149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:21:46.109165 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:21:46.109181 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:21:46.109197 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:21:46.109213 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:21:46.109229 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:21:46.109250 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:21:46.109266 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 6 00:21:46.109283 kernel: ACPI: Interpreter enabled Nov 6 00:21:46.109301 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 00:21:46.109319 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:21:46.109337 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:21:46.109353 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 6 00:21:46.109370 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 6 00:21:46.109389 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:21:46.109638 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:21:46.109827 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 6 00:21:46.110022 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 6 00:21:46.110046 kernel: PCI host bridge to bus 0000:00 Nov 6 00:21:46.110245 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:21:46.110417 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:21:46.110599 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 00:21:46.110767 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Nov 6 00:21:46.110932 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:21:46.111175 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 6 00:21:46.111388 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:21:46.111928 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Nov 6 00:21:46.112166 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 6 00:21:46.112368 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Nov 6 00:21:46.112550 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Nov 6 00:21:46.112730 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Nov 6 00:21:46.112930 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 00:21:46.113647 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Nov 6 00:21:46.113864 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Nov 6 00:21:46.114085 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 00:21:46.114350 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Nov 6 00:21:46.114549 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Nov 6 00:21:46.114573 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:21:46.114594 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:21:46.114612 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 
Nov 6 00:21:46.114628 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:21:46.114645 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 6 00:21:46.115157 kernel: iommu: Default domain type: Translated Nov 6 00:21:46.115175 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:21:46.115192 kernel: efivars: Registered efivars operations Nov 6 00:21:46.115208 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:21:46.115225 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:21:46.115241 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Nov 6 00:21:46.115257 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Nov 6 00:21:46.115273 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff] Nov 6 00:21:46.115289 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Nov 6 00:21:46.115310 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Nov 6 00:21:46.115328 kernel: vgaarb: loaded Nov 6 00:21:46.115345 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:21:46.115362 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:21:46.115379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:21:46.115397 kernel: pnp: PnP ACPI init Nov 6 00:21:46.115415 kernel: pnp: PnP ACPI: found 7 devices Nov 6 00:21:46.115434 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:21:46.115451 kernel: NET: Registered PF_INET protocol family Nov 6 00:21:46.115473 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:21:46.115492 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 6 00:21:46.115511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:21:46.115529 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 00:21:46.115547 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 6 00:21:46.115563 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 6 00:21:46.115580 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:21:46.115598 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:21:46.115616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:21:46.115639 kernel: NET: Registered PF_XDP protocol family Nov 6 00:21:46.116171 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:21:46.116356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:21:46.116523 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 00:21:46.116687 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Nov 6 00:21:46.116882 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 00:21:46.116908 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:21:46.116933 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 00:21:46.116960 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Nov 6 00:21:46.116979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 00:21:46.116998 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 6 00:21:46.117017 kernel: clocksource: Switched to clocksource tsc Nov 6 00:21:46.117035 kernel: Initialise system trusted keyrings Nov 6 00:21:46.117054 kernel: workingset: 
timestamp_bits=39 max_order=21 bucket_order=0 Nov 6 00:21:46.117073 kernel: Key type asymmetric registered Nov 6 00:21:46.118121 kernel: Asymmetric key parser 'x509' registered Nov 6 00:21:46.118149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:21:46.118167 kernel: io scheduler mq-deadline registered Nov 6 00:21:46.118184 kernel: io scheduler kyber registered Nov 6 00:21:46.118202 kernel: io scheduler bfq registered Nov 6 00:21:46.118220 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:21:46.118239 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 6 00:21:46.118453 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Nov 6 00:21:46.118479 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Nov 6 00:21:46.118667 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Nov 6 00:21:46.118696 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 6 00:21:46.118881 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Nov 6 00:21:46.118906 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:21:46.118924 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:21:46.118942 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 6 00:21:46.118970 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Nov 6 00:21:46.118989 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Nov 6 00:21:46.119220 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Nov 6 00:21:46.119253 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:21:46.119272 kernel: i8042: Warning: Keylock active Nov 6 00:21:46.119291 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:21:46.119310 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:21:46.119502 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 6 00:21:46.119679 kernel: rtc_cmos 00:00: registered as rtc0 Nov 6 00:21:46.119854 kernel: rtc_cmos 00:00: setting system clock to 2025-11-06T00:21:45 UTC (1762388505) Nov 6 00:21:46.120040 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 6 00:21:46.120064 kernel: intel_pstate: CPU model not supported Nov 6 00:21:46.120083 kernel: pstore: Using crash dump compression: deflate Nov 6 00:21:46.121145 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 00:21:46.121171 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:21:46.121189 kernel: Segment Routing with IPv6 Nov 6 00:21:46.121207 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:21:46.121223 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:21:46.121239 kernel: Key type dns_resolver registered Nov 6 00:21:46.121264 kernel: IPI shorthand broadcast: enabled Nov 6 00:21:46.121282 kernel: sched_clock: Marking stable (3582003992, 533509322)->(4651216375, -535703061) Nov 6 00:21:46.121299 kernel: registered taskstats version 1 Nov 6 00:21:46.121316 kernel: Loading compiled-in X.509 certificates Nov 6 00:21:46.121331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:21:46.121348 kernel: Demotion targets for Node 0: null Nov 6 00:21:46.121365 kernel: Key type .fscrypt registered Nov 6 00:21:46.121383 kernel: Key type fscrypt-provisioning registered Nov 6 00:21:46.121401 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:21:46.121423 kernel: ima: No architecture policies found Nov 6 
00:21:46.121441 kernel: clk: Disabling unused clocks Nov 6 00:21:46.121460 kernel: Warning: unable to open an initial console. Nov 6 00:21:46.121478 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:21:46.121497 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:21:46.121515 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 6 00:21:46.121533 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:21:46.121551 kernel: Run /init as init process Nov 6 00:21:46.121569 kernel: with arguments: Nov 6 00:21:46.121590 kernel: /init Nov 6 00:21:46.121608 kernel: with environment: Nov 6 00:21:46.121625 kernel: HOME=/ Nov 6 00:21:46.121643 kernel: TERM=linux Nov 6 00:21:46.121662 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:21:46.121685 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:21:46.121705 systemd[1]: Detected virtualization google. Nov 6 00:21:46.121723 systemd[1]: Detected architecture x86-64. Nov 6 00:21:46.121746 systemd[1]: Running in initrd. Nov 6 00:21:46.121764 systemd[1]: No hostname configured, using default hostname. Nov 6 00:21:46.121785 systemd[1]: Hostname set to . Nov 6 00:21:46.121804 systemd[1]: Initializing machine ID from random generator. Nov 6 00:21:46.121823 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:21:46.121843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:21:46.121883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:21:46.121907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:21:46.121926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:21:46.121943 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:21:46.121981 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:21:46.122002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:21:46.122025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:21:46.122043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:21:46.122061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:21:46.122078 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:21:46.125148 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:21:46.125170 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:21:46.125191 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:21:46.125212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:21:46.125233 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:21:46.125739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Nov 6 00:21:46.125761 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:21:46.125783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:21:46.125803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:21:46.125824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:21:46.125845 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:21:46.125866 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:21:46.125886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:21:46.125911 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:21:46.125933 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:21:46.125961 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:21:46.125982 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:21:46.126003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:21:46.126024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:21:46.126084 systemd-journald[192]: Collecting audit messages is disabled. Nov 6 00:21:46.126155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:21:46.126178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:21:46.126202 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:21:46.126224 systemd-journald[192]: Journal started Nov 6 00:21:46.126267 systemd-journald[192]: Runtime Journal (/run/log/journal/d7a6dfc560204da79170e9d7fb30c8a7) is 8M, max 148.6M, 140.6M free. Nov 6 00:21:46.130525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:21:46.130569 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:21:46.141073 systemd-modules-load[193]: Inserted module 'overlay' Nov 6 00:21:46.147206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:21:46.158139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:46.166874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:21:46.176479 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:21:46.189119 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:21:46.197161 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:21:46.194253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:21:46.200118 kernel: Bridge firewalling registered Nov 6 00:21:46.201160 systemd-modules-load[193]: Inserted module 'br_netfilter' Nov 6 00:21:46.206544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:21:46.210514 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:21:46.218737 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 00:21:46.223434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:21:46.230488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:21:46.235283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:21:46.246500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:21:46.255233 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:21:46.268740 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:21:46.325777 systemd-resolved[237]: Positive Trust Anchors: Nov 6 00:21:46.326218 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:21:46.326287 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:21:46.331040 systemd-resolved[237]: Defaulting to hostname 'linux'. Nov 6 00:21:46.335848 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:21:46.345354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:21:46.398142 kernel: SCSI subsystem initialized Nov 6 00:21:46.411124 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:21:46.423157 kernel: iscsi: registered transport (tcp) Nov 6 00:21:46.448468 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:21:46.448549 kernel: QLogic iSCSI HBA Driver Nov 6 00:21:46.471986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:21:46.491235 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:21:46.500072 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:21:46.561568 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:21:46.563791 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:21:46.623149 kernel: raid6: avx2x4 gen() 17975 MB/s Nov 6 00:21:46.640152 kernel: raid6: avx2x2 gen() 18104 MB/s Nov 6 00:21:46.657535 kernel: raid6: avx2x1 gen() 13930 MB/s Nov 6 00:21:46.657586 kernel: raid6: using algorithm avx2x2 gen() 18104 MB/s Nov 6 00:21:46.675737 kernel: raid6: .... xor() 18774 MB/s, rmw enabled Nov 6 00:21:46.675804 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:21:46.699135 kernel: xor: automatically using best checksumming function avx Nov 6 00:21:46.883137 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:21:46.891427 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 6 00:21:46.895077 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:21:46.929198 systemd-udevd[443]: Using default interface naming scheme 'v255'. Nov 6 00:21:46.938050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:21:46.942022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:21:46.977758 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Nov 6 00:21:47.011397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:21:47.013426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:21:47.105084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:21:47.112076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:21:47.211554 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Nov 6 00:21:47.220382 kernel: scsi host0: Virtio SCSI HBA Nov 6 00:21:47.226119 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 6 00:21:47.234122 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:21:47.285490 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 00:21:47.288747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:21:47.289302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:47.319466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:21:47.330574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:21:47.333706 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:21:47.345114 kernel: AES CTR mode by8 optimization enabled Nov 6 00:21:47.348983 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 6 00:21:47.349351 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 6 00:21:47.349576 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 6 00:21:47.349832 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 6 00:21:47.350071 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 6 00:21:47.366735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:21:47.366790 kernel: GPT:17805311 != 33554431 Nov 6 00:21:47.366816 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:21:47.366859 kernel: GPT:17805311 != 33554431 Nov 6 00:21:47.366884 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 00:21:47.366906 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:21:47.372175 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 6 00:21:47.406340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:47.476865 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Nov 6 00:21:47.483478 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:21:47.513885 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Nov 6 00:21:47.527681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 6 00:21:47.539330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Nov 6 00:21:47.539584 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Nov 6 00:21:47.544492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:21:47.549388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:21:47.554389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:21:47.560513 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:21:47.574267 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:21:47.584383 disk-uuid[597]: Primary Header is updated. Nov 6 00:21:47.584383 disk-uuid[597]: Secondary Entries is updated. Nov 6 00:21:47.584383 disk-uuid[597]: Secondary Header is updated. Nov 6 00:21:47.601128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:21:47.604336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:21:47.618136 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:21:48.632129 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:21:48.636163 disk-uuid[598]: The operation has completed successfully. Nov 6 00:21:48.713930 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:21:48.714115 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:21:48.767342 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:21:48.786374 sh[619]: Success Nov 6 00:21:48.809577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:21:48.810146 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:21:48.810198 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:21:48.822129 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Nov 6 00:21:48.894116 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:21:48.899206 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:21:48.911523 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:21:48.927127 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (631) Nov 6 00:21:48.929809 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:21:48.929860 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:21:48.951904 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 00:21:48.951972 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:21:48.951999 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:21:48.958102 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:21:48.959491 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:21:48.962549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:21:48.964781 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:21:48.973272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 6 00:21:49.016128 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (664) Nov 6 00:21:49.019073 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:21:49.019155 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:21:49.027896 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:21:49.027965 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:21:49.027991 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:21:49.034144 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:21:49.035783 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:21:49.042765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:21:49.161158 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:21:49.169282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:21:49.292490 systemd-networkd[800]: lo: Link UP Nov 6 00:21:49.292995 systemd-networkd[800]: lo: Gained carrier Nov 6 00:21:49.296625 systemd-networkd[800]: Enumeration completed Nov 6 00:21:49.296791 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:21:49.297583 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:21:49.297590 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:21:49.307600 ignition[721]: Ignition 2.22.0 Nov 6 00:21:49.302348 systemd-networkd[800]: eth0: Link UP Nov 6 00:21:49.307611 ignition[721]: Stage: fetch-offline Nov 6 00:21:49.303108 systemd-networkd[800]: eth0: Gained carrier Nov 6 00:21:49.307667 ignition[721]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:49.303128 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:21:49.307681 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:49.316188 systemd-networkd[800]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e.c.flatcar-212911.internal' to 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:21:49.307813 ignition[721]: parsed url from cmdline: "" Nov 6 00:21:49.316206 systemd-networkd[800]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 6 00:21:49.307819 ignition[721]: no config URL provided Nov 6 00:21:49.318321 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:21:49.307829 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:21:49.323795 systemd[1]: Reached target network.target - Network. Nov 6 00:21:49.307841 ignition[721]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:21:49.329574 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 00:21:49.307851 ignition[721]: failed to fetch config: resource requires networking Nov 6 00:21:49.308078 ignition[721]: Ignition finished successfully Nov 6 00:21:49.385210 ignition[809]: Ignition 2.22.0 Nov 6 00:21:49.385227 ignition[809]: Stage: fetch Nov 6 00:21:49.385430 ignition[809]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:49.385448 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:49.385587 ignition[809]: parsed url from cmdline: "" Nov 6 00:21:49.385595 ignition[809]: no config URL provided Nov 6 00:21:49.385615 ignition[809]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:21:49.398083 unknown[809]: fetched base config from "system" Nov 6 00:21:49.385630 ignition[809]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:21:49.398224 unknown[809]: fetched base config from "system" Nov 6 00:21:49.385681 ignition[809]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 6 00:21:49.398231 unknown[809]: fetched user config from "gcp" Nov 6 00:21:49.389594 ignition[809]: GET result: OK Nov 6 00:21:49.401429 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 00:21:49.389668 ignition[809]: parsing config with SHA512: a431936dec4e3126db77eb89a697046a7f1a13c10179bcc1dffbda58befa1adc919cc6ac3410b2d41767dc1408f5d0bd0316649ae728c02874bd8fdd5a90a5fe Nov 6 00:21:49.405131 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:21:49.398710 ignition[809]: fetch: fetch complete Nov 6 00:21:49.398716 ignition[809]: fetch: fetch passed Nov 6 00:21:49.398767 ignition[809]: Ignition finished successfully Nov 6 00:21:49.447314 ignition[815]: Ignition 2.22.0 Nov 6 00:21:49.447331 ignition[815]: Stage: kargs Nov 6 00:21:49.447540 ignition[815]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:49.451110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:21:49.447557 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:49.456965 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:21:49.449210 ignition[815]: kargs: kargs passed Nov 6 00:21:49.449265 ignition[815]: Ignition finished successfully Nov 6 00:21:49.499680 ignition[822]: Ignition 2.22.0 Nov 6 00:21:49.499698 ignition[822]: Stage: disks Nov 6 00:21:49.499923 ignition[822]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:49.503063 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:21:49.499943 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:49.511040 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:21:49.501409 ignition[822]: disks: disks passed Nov 6 00:21:49.516265 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:21:49.501466 ignition[822]: Ignition finished successfully Nov 6 00:21:49.520208 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:21:49.524236 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:21:49.528193 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:21:49.533740 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:21:49.574430 systemd-fsck[831]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 6 00:21:49.586990 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Nov 6 00:21:49.593249 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:21:49.769143 kernel: EXT4-fs (sda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:21:49.770085 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:21:49.774285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:21:49.779718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:21:49.798067 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:21:49.804854 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 00:21:49.804934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:21:49.820286 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (839) Nov 6 00:21:49.820335 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:21:49.820361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:21:49.804973 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:21:49.829250 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:21:49.829290 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:21:49.829314 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:21:49.819406 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:21:49.826158 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 00:21:49.835155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:21:49.946150 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:21:49.955744 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:21:49.963018 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:21:49.969288 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:21:50.116710 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:21:50.119746 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:21:50.135284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:21:50.147749 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:21:50.149389 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:21:50.195905 ignition[951]: INFO : Ignition 2.22.0 Nov 6 00:21:50.195905 ignition[951]: INFO : Stage: mount Nov 6 00:21:50.195905 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:50.195905 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:50.216214 ignition[951]: INFO : mount: mount passed Nov 6 00:21:50.216214 ignition[951]: INFO : Ignition finished successfully Nov 6 00:21:50.195917 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:21:50.200536 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:21:50.206268 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:21:50.232704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 6 00:21:50.264115 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (963) Nov 6 00:21:50.267131 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:21:50.267190 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:21:50.273897 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:21:50.273959 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:21:50.273982 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:21:50.276549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:21:50.317660 ignition[980]: INFO : Ignition 2.22.0 Nov 6 00:21:50.317660 ignition[980]: INFO : Stage: files Nov 6 00:21:50.324234 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:50.324234 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:50.324234 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:21:50.324234 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:21:50.324234 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:21:50.338982 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:21:50.338982 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:21:50.338982 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:21:50.338982 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:21:50.338982 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:21:50.326700 unknown[980]: wrote ssh authorized keys file for user: core Nov 6 00:21:50.480768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:21:50.745907 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:21:50.751261 
ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:21:50.751261 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:21:50.796182 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:21:50.796182 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:21:50.796182 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 00:21:50.883348 systemd-networkd[800]: eth0: Gained IPv6LL Nov 6 00:21:51.285484 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:21:52.159104 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:21:52.159104 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:21:52.168265 ignition[980]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:21:52.168265 ignition[980]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:21:52.168265 ignition[980]: INFO : files: files passed Nov 6 00:21:52.168265 ignition[980]: INFO : Ignition finished successfully Nov 6 00:21:52.168326 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:21:52.175690 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:21:52.184020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:21:52.197276 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:21:52.197650 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
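The files stage above downloads helm, writes a handful of files, creates the kubernetes.raw symlink and enables prepare-helm.service via a preset. The actual Ignition config for this instance is fetched from the GCE metadata service and is not reproduced in the journal; purely as an illustration, a config fragment describing a subset of those operations could be sketched like this (spec version is assumed, paths and the unit name are taken from the log, serialized from Python for brevity):

    import json

    # Illustrative only: not the config used on this host.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version for Ignition 2.22
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }
    print(json.dumps(config, indent=2))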
Nov 6 00:21:52.222544 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:21:52.228269 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:21:52.227462 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:21:52.238246 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:21:52.229059 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:21:52.235465 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:21:52.307728 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:21:52.307915 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:21:52.312911 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:21:52.315401 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:21:52.319567 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:21:52.321664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:21:52.364347 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:21:52.367186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:21:52.397259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:21:52.401553 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:21:52.404635 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:21:52.408622 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:21:52.409063 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:21:52.416529 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:21:52.419816 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:21:52.423670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:21:52.427660 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:21:52.431666 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:21:52.435655 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:21:52.439618 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:21:52.443837 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:21:52.447655 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:21:52.452787 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:21:52.456674 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:21:52.460436 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:21:52.460863 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:21:52.471223 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:21:52.471689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:21:52.475445 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Nov 6 00:21:52.475854 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:21:52.479600 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:21:52.480020 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:21:52.487488 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:21:52.488037 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:21:52.490641 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:21:52.491049 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:21:52.496284 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:21:52.507204 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:21:52.507582 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:21:52.521352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:21:52.524279 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:21:52.524556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:21:52.531514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:21:52.531850 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:21:52.547261 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:21:52.547446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:21:52.561663 ignition[1034]: INFO : Ignition 2.22.0 Nov 6 00:21:52.561663 ignition[1034]: INFO : Stage: umount Nov 6 00:21:52.567210 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:21:52.567210 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 00:21:52.567210 ignition[1034]: INFO : umount: umount passed Nov 6 00:21:52.567210 ignition[1034]: INFO : Ignition finished successfully Nov 6 00:21:52.564316 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:21:52.566586 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:21:52.566747 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:21:52.571777 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:21:52.571919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:21:52.577353 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:21:52.577420 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:21:52.579414 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:21:52.579576 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:21:52.583397 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:21:52.583563 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:21:52.590354 systemd[1]: Stopped target network.target - Network. Nov 6 00:21:52.593345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:21:52.593511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:21:52.597380 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:21:52.601338 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:21:52.601543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 6 00:21:52.605342 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:21:52.609341 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:21:52.613422 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:21:52.613586 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:21:52.617393 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:21:52.617554 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:21:52.621385 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:21:52.621561 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:21:52.625390 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:21:52.625559 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:21:52.629388 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:21:52.629567 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:21:52.633788 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:21:52.638613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:21:52.645287 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:21:52.645445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:21:52.653064 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:21:52.653543 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:21:52.653700 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:21:52.660945 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:21:52.661727 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:21:52.662799 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:21:52.662848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:21:52.667903 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:21:52.673378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:21:52.673444 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:21:52.681433 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:21:52.681596 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:21:52.692411 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:21:52.692496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:21:52.698217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:21:52.698299 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:21:52.704514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:21:52.720776 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:21:52.720855 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:21:52.723665 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:21:52.723927 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:21:52.734109 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 6 00:21:52.734236 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:21:52.737253 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:21:52.737310 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:21:52.740211 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:21:52.740421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:21:52.747500 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:21:52.747566 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:21:52.756173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:21:52.756261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:21:52.764385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:21:52.772384 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:21:52.772456 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:21:52.783702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:21:52.783788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:21:52.794616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:21:52.794793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:52.802021 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:21:52.802083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:21:52.802173 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:21:52.802694 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:21:52.802805 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:21:52.875211 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 6 00:21:52.805024 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:21:52.805289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:21:52.810499 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:21:52.814535 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:21:52.840448 systemd[1]: Switching root. 
Nov 6 00:21:52.889166 systemd-journald[192]: Journal stopped Nov 6 00:21:54.870868 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:21:54.870917 kernel: SELinux: policy capability open_perms=1 Nov 6 00:21:54.870948 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:21:54.870968 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:21:54.870987 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:21:54.871006 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:21:54.871028 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:21:54.871048 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:21:54.871071 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:21:54.871110 kernel: audit: type=1403 audit(1762388513.478:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:21:54.871139 systemd[1]: Successfully loaded SELinux policy in 65.872ms. Nov 6 00:21:54.871161 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.002ms. Nov 6 00:21:54.871182 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:21:54.871202 systemd[1]: Detected virtualization google. Nov 6 00:21:54.871228 systemd[1]: Detected architecture x86-64. Nov 6 00:21:54.871247 systemd[1]: Detected first boot. Nov 6 00:21:54.871267 systemd[1]: Initializing machine ID from random generator. Nov 6 00:21:54.871289 zram_generator::config[1078]: No configuration found. Nov 6 00:21:54.871311 kernel: Guest personality initialized and is inactive Nov 6 00:21:54.871334 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:21:54.871353 kernel: Initialized host personality Nov 6 00:21:54.871372 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:21:54.871394 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:21:54.871417 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:21:54.871437 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:21:54.871457 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:21:54.871479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:21:54.871506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:21:54.871529 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:21:54.871553 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:21:54.871576 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:21:54.871599 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:21:54.871622 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:21:54.871644 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:21:54.871670 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:21:54.871692 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
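The systemd 256.8 line above lists compile-time features as a single +/- string. A trivial sketch that splits such a string into enabled and disabled build options (the string below is abbreviated from the log):

    # Sketch: parse a systemd feature string as printed at boot.
    features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL"
    enabled = sorted(f[1:] for f in features.split() if f.startswith("+"))
    disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
    print("enabled:", enabled)
    print("disabled:", disabled)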
Nov 6 00:21:54.871711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:21:54.871731 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:21:54.871752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:21:54.871774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:21:54.871802 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:21:54.871824 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:21:54.871846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:21:54.871871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:21:54.871892 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:21:54.871913 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:21:54.871935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:21:54.871957 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:21:54.871979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:21:54.872001 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:21:54.872026 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:21:54.872048 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:21:54.872070 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:21:54.873131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:21:54.873166 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:21:54.873194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:21:54.873225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:21:54.873249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:21:54.873274 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:21:54.873297 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:21:54.873322 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:21:54.873345 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:21:54.873369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:54.873398 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:21:54.873423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:21:54.873448 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:21:54.873474 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:21:54.873499 systemd[1]: Reached target machines.target - Containers. Nov 6 00:21:54.873523 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 6 00:21:54.873547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:54.873569 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:21:54.873599 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:21:54.873622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:21:54.873648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:21:54.873672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:21:54.873695 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:21:54.873720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:21:54.873744 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:21:54.873768 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:21:54.873793 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:21:54.873821 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:21:54.873846 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:21:54.873871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:54.873895 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:21:54.873919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:21:54.873943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:21:54.873968 kernel: loop: module loaded Nov 6 00:21:54.873991 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:21:54.874019 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:21:54.874043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:21:54.874067 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:21:54.874117 systemd[1]: Stopped verity-setup.service. Nov 6 00:21:54.874147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:54.874169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:21:54.874329 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:21:54.874354 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:21:54.874384 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:21:54.874542 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:21:54.874567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:21:54.874591 kernel: fuse: init (API version 7.41) Nov 6 00:21:54.874750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:21:54.874777 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:21:54.874800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Nov 6 00:21:54.874957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:21:54.874983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:21:54.875012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:21:54.876930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:21:54.876963 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:21:54.876989 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:21:54.877014 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:21:54.877039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:21:54.877064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:21:54.877105 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:21:54.877143 kernel: ACPI: bus type drm_connector registered Nov 6 00:21:54.877164 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:21:54.877184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:21:54.877244 systemd-journald[1152]: Collecting audit messages is disabled. Nov 6 00:21:54.877304 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:21:54.877340 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:21:54.877369 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:21:54.877395 systemd-journald[1152]: Journal started Nov 6 00:21:54.877435 systemd-journald[1152]: Runtime Journal (/run/log/journal/47154580d2c84cbb8dcd7d57016b093b) is 8M, max 148.6M, 140.6M free. Nov 6 00:21:54.349490 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:21:54.368825 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 6 00:21:54.369436 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:21:54.883183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:21:54.890117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:21:54.896263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:21:54.896331 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:21:54.906118 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:21:54.915207 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:21:54.918113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:54.927116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:21:54.934122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:21:54.939112 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:21:54.945164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:21:54.949126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 00:21:54.959348 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:21:54.969267 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:21:54.976293 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:21:54.982960 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:21:54.983836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:21:54.991798 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:21:55.018491 kernel: loop0: detected capacity change from 0 to 229808 Nov 6 00:21:55.043850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:21:55.057415 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:21:55.064950 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:21:55.074386 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:21:55.076763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:21:55.095126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:21:55.128648 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:21:55.135752 systemd-journald[1152]: Time spent on flushing to /var/log/journal/47154580d2c84cbb8dcd7d57016b093b is 86.573ms for 967 entries. Nov 6 00:21:55.135752 systemd-journald[1152]: System Journal (/var/log/journal/47154580d2c84cbb8dcd7d57016b093b) is 8M, max 584.8M, 576.8M free. Nov 6 00:21:55.251646 systemd-journald[1152]: Received client request to flush runtime journal. Nov 6 00:21:55.251810 kernel: loop1: detected capacity change from 0 to 128016 Nov 6 00:21:55.251869 kernel: loop2: detected capacity change from 0 to 110984 Nov 6 00:21:55.152865 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:21:55.255372 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:21:55.263565 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:21:55.269085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:21:55.303118 kernel: loop3: detected capacity change from 0 to 50736 Nov 6 00:21:55.322580 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 6 00:21:55.323800 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Nov 6 00:21:55.334800 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:21:55.373281 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:21:55.384139 kernel: loop4: detected capacity change from 0 to 229808 Nov 6 00:21:55.417132 kernel: loop5: detected capacity change from 0 to 128016 Nov 6 00:21:55.450873 kernel: loop6: detected capacity change from 0 to 110984 Nov 6 00:21:55.495128 kernel: loop7: detected capacity change from 0 to 50736 Nov 6 00:21:55.527673 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Nov 6 00:21:55.531273 (sd-merge)[1223]: Merged extensions into '/usr'. Nov 6 00:21:55.541865 systemd[1]: Reload requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:21:55.542038 systemd[1]: Reloading... 
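The (sd-merge) lines above show systemd-sysext picking up the containerd-flatcar, docker-flatcar, kubernetes and oem-gce extensions and merging them into /usr. As a rough sketch, assuming the usual extension search directories, the candidate names can be listed as follows; this only mimics discovery, not the overlay merge itself:

    from pathlib import Path

    # Assumption: these are among the directories systemd-sysext scans for images.
    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_extensions():
        names = set()
        for d in SEARCH:
            p = Path(d)
            if not p.is_dir():
                continue
            for entry in p.iterdir():
                # A raw image "kubernetes.raw" and a directory "oem-gce" both count.
                names.add(entry.name.removesuffix(".raw"))
        return sorted(names)

    print(candidate_extensions())  # e.g. ['kubernetes', 'oem-gce', ...]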
Nov 6 00:21:55.743274 zram_generator::config[1249]: No configuration found. Nov 6 00:21:56.122846 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:21:56.258646 systemd[1]: Reloading finished in 714 ms. Nov 6 00:21:56.274246 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:21:56.279815 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:21:56.294284 systemd[1]: Starting ensure-sysext.service... Nov 6 00:21:56.299420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:21:56.334243 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:21:56.334264 systemd[1]: Reloading... Nov 6 00:21:56.359171 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:21:56.363231 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:21:56.363911 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:21:56.367339 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:21:56.375206 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:21:56.375783 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Nov 6 00:21:56.375903 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Nov 6 00:21:56.387687 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:21:56.388163 systemd-tmpfiles[1290]: Skipping /boot Nov 6 00:21:56.416483 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:21:56.416511 systemd-tmpfiles[1290]: Skipping /boot Nov 6 00:21:56.466242 zram_generator::config[1317]: No configuration found. Nov 6 00:21:56.704704 systemd[1]: Reloading finished in 369 ms. Nov 6 00:21:56.727615 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:21:56.742714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:21:56.755979 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:21:56.762823 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:21:56.769140 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:21:56.778461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:21:56.784478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:21:56.791339 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:21:56.801274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.801617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:56.805225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:21:56.816482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
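The systemd-tmpfiles notices above ("Duplicate line for path ..., ignoring") come from the same path being declared more than once across tmpfiles.d fragments. A deliberately naive sketch that reports such repeats (field layout per tmpfiles.d(5): type, path, mode, user, group, age, argument; quoting and specifiers are ignored here):

    import collections
    import glob

    seen = collections.defaultdict(list)
    for frag in glob.glob("/usr/lib/tmpfiles.d/*.conf"):
        with open(frag) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(frag)   # fields[1] is the path column

    for path, frags in seen.items():
        if len(frags) > 1:
            print(f"{path}: declared {len(frags)} times")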
Nov 6 00:21:56.822492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:21:56.825423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:56.826156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:56.826336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.833886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.836018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:56.836333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:56.836836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:56.845498 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:21:56.848944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.861817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.862554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:56.867351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:21:56.874060 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 6 00:21:56.875765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:56.875987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:56.876306 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:21:56.879352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:56.885630 systemd-udevd[1363]: Using default interface naming scheme 'v255'. Nov 6 00:21:56.893337 systemd[1]: Finished ensure-sysext.service. Nov 6 00:21:56.899017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:21:56.900228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:21:56.926656 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:21:56.946401 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:21:56.947559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:21:56.977588 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 6 00:21:56.978333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:21:56.983556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:21:56.992418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:21:56.994358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:21:56.997621 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:21:57.007781 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 6 00:21:57.010365 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:21:57.019836 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 6 00:21:57.023539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:21:57.029445 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:21:57.033197 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:21:57.043819 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:21:57.049302 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:21:57.055855 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:21:57.085466 augenrules[1416]: No rules Nov 6 00:21:57.088426 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:21:57.095711 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:21:57.097207 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:21:57.176349 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 6 00:21:57.336223 systemd-networkd[1405]: lo: Link UP Nov 6 00:21:57.336238 systemd-networkd[1405]: lo: Gained carrier Nov 6 00:21:57.337222 systemd-networkd[1405]: Enumeration completed Nov 6 00:21:57.337464 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:21:57.344460 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:21:57.347715 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:21:57.374404 systemd-resolved[1362]: Positive Trust Anchors: Nov 6 00:21:57.374437 systemd-resolved[1362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:21:57.374498 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:21:57.381776 systemd-resolved[1362]: Defaulting to hostname 'linux'. Nov 6 00:21:57.384667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
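The positive trust anchor logged by systemd-resolved above is the root zone DS record for key tag 20326 (algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256). A small sketch splitting it into its fields:

    # The record text is copied verbatim from the log line above.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(f"key tag {key_tag}, algorithm {algorithm}, digest type {digest_type}, "
          f"{len(digest) // 2}-byte digest")
    # -> key tag 20326, algorithm 8, digest type 2, 32-byte digest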
Nov 6 00:21:57.387336 systemd[1]: Reached target network.target - Network. Nov 6 00:21:57.390220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:21:57.393234 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:21:57.397386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:21:57.400290 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:21:57.403231 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:21:57.406447 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:21:57.417138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:21:57.427266 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:21:57.437295 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:21:57.437349 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:21:57.445269 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:21:57.456839 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:21:57.471296 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:21:57.484870 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:21:57.496530 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:21:57.507248 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:21:57.526186 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:21:57.536973 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:21:57.549918 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:21:57.561528 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:21:57.575174 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Nov 6 00:21:57.575996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:21:57.594125 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:21:57.622119 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 6 00:21:57.630695 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:21:57.640235 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:21:57.655294 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:21:57.655378 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 6 00:21:57.663180 kernel: ACPI: button: Sleep Button [SLPF] Nov 6 00:21:57.675112 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 6 00:21:57.680307 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Nov 6 00:21:57.689265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:21:57.689314 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Nov 6 00:21:57.692345 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:21:57.696886 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:21:57.697487 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:21:57.699627 systemd-networkd[1405]: eth0: Link UP Nov 6 00:21:57.700796 systemd-networkd[1405]: eth0: Gained carrier Nov 6 00:21:57.700835 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:21:57.709664 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:21:57.715153 systemd-networkd[1405]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e.c.flatcar-212911.internal' to 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:21:57.715181 systemd-networkd[1405]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 6 00:21:57.722422 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:21:57.733122 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:21:57.736696 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:21:57.750926 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:21:57.766405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:21:57.775242 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:21:57.788439 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:21:57.809467 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:21:57.822408 systemd[1]: Started ntpd.service - Network Time Service. Nov 6 00:21:57.834300 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:21:57.847030 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:21:57.859621 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:21:57.860519 oslogin_cache_refresh[1486]: Refreshing passwd entry cache Nov 6 00:21:57.862679 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing passwd entry cache Nov 6 00:21:57.875181 jq[1478]: false Nov 6 00:21:57.886675 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:21:57.900912 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting users, quitting Nov 6 00:21:57.900912 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:21:57.900912 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing group entry cache Nov 6 00:21:57.897321 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 6 00:21:57.895031 oslogin_cache_refresh[1486]: Failure getting users, quitting Nov 6 00:21:57.900197 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
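The "Overlong DHCP hostname received" line above shows networkd falling back from the full GCE FQDN to its first label, since the full name does not fit the 64-character kernel hostname limit. A hedged sketch that reproduces the behaviour seen in this log (the exact truncation rule is an assumption here, not taken from networkd's source):

    HOST_NAME_MAX = 64  # Linux kernel limit for a hostname

    def shorten(fqdn):
        """Keep the FQDN if it fits, otherwise fall back to its first DNS label."""
        return fqdn if len(fqdn) <= HOST_NAME_MAX else fqdn.split(".", 1)[0]

    print(shorten("ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e.c.flatcar-212911.internal"))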
Nov 6 00:21:57.895077 oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:21:57.895185 oslogin_cache_refresh[1486]: Refreshing group entry cache Nov 6 00:21:57.906235 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting groups, quitting Nov 6 00:21:57.906235 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:21:57.905409 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:21:57.904352 oslogin_cache_refresh[1486]: Failure getting groups, quitting Nov 6 00:21:57.904374 oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:21:57.917167 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:21:57.935535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:21:57.946895 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:21:57.952425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:21:57.952953 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:21:57.954362 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:21:57.963073 jq[1508]: true Nov 6 00:21:57.973658 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:21:57.975195 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:21:58.009708 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:21:58.025784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 6 00:21:58.053639 update_engine[1506]: I20251106 00:21:58.046334 1506 main.cc:92] Flatcar Update Engine starting Nov 6 00:21:58.058486 coreos-metadata[1473]: Nov 06 00:21:58.056 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 6 00:21:58.058360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:21:58.061420 coreos-metadata[1473]: Nov 06 00:21:58.059 INFO Fetch successful Nov 6 00:21:58.061420 coreos-metadata[1473]: Nov 06 00:21:58.059 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 6 00:21:58.064527 coreos-metadata[1473]: Nov 06 00:21:58.064 INFO Fetch successful Nov 6 00:21:58.064527 coreos-metadata[1473]: Nov 06 00:21:58.064 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 6 00:21:58.064986 coreos-metadata[1473]: Nov 06 00:21:58.064 INFO Fetch successful Nov 6 00:21:58.064986 coreos-metadata[1473]: Nov 06 00:21:58.064 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 6 00:21:58.072718 coreos-metadata[1473]: Nov 06 00:21:58.068 INFO Fetch successful Nov 6 00:21:58.096244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
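The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254; those endpoints require the Metadata-Flavor: Google request header. A minimal sketch of one such fetch, using a URL taken verbatim from the log:

    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/computeMetadata/v1/instance/hostname",
        headers={"Metadata-Flavor": "Google"},  # required by the GCE metadata server
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())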
Nov 6 00:21:58.101084 extend-filesystems[1484]: Found /dev/sda6 Nov 6 00:21:58.122840 extend-filesystems[1484]: Found /dev/sda9 Nov 6 00:21:58.132169 tar[1516]: linux-amd64/LICENSE Nov 6 00:21:58.132169 tar[1516]: linux-amd64/helm Nov 6 00:21:58.143252 extend-filesystems[1484]: Checking size of /dev/sda9 Nov 6 00:21:58.143577 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:21:58.143912 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:21:58.162375 jq[1517]: true Nov 6 00:21:58.242533 extend-filesystems[1484]: Resized partition /dev/sda9 Nov 6 00:21:58.270715 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:21:58.271713 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:21:58.272998 extend-filesystems[1550]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:21:58.307643 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 6 00:21:58.340933 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:21:58.444163 dbus-daemon[1474]: [system] SELinux support is enabled Nov 6 00:21:58.444424 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:21:58.498313 update_engine[1506]: I20251106 00:21:58.475380 1506 update_check_scheduler.cc:74] Next update check in 9m19s Nov 6 00:21:58.468000 dbus-daemon[1474]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1405 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 6 00:21:58.451848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:21:58.552391 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 6 00:21:58.451897 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:21:58.452050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:21:58.452072 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:21:58.498900 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 6 00:21:58.500817 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:21:58.543364 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:21:58.556965 extend-filesystems[1550]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 6 00:21:58.556965 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 6 00:21:58.556965 extend-filesystems[1550]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 6 00:21:58.557445 extend-filesystems[1484]: Resized filesystem in /dev/sda9 Nov 6 00:21:58.560169 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:21:58.559266 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:21:58.559860 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:21:58.607495 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
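For scale, the resize logged above grows sda9 from 1617920 to 3587067 blocks of 4 KiB each, i.e. from roughly 6.2 GiB to roughly 13.7 GiB:

    # Block counts and the 4 KiB block size are taken from the log lines above.
    BLOCK = 4096
    old_blocks, new_blocks = 1617920, 3587067

    def to_gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # -> 6.17 GiB -> 13.68 GiB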
Nov 6 00:21:58.618663 ntpd[1494]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:21:58.618782 ntpd[1494]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:21:58.619710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: ---------------------------------------------------- Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: corporation. Support and training for ntp-4 are Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: available at https://www.nwtime.org/support Nov 6 00:21:58.621364 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: ---------------------------------------------------- Nov 6 00:21:58.618799 ntpd[1494]: ---------------------------------------------------- Nov 6 00:21:58.618813 ntpd[1494]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: proto: precision = 0.106 usec (-23) Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: basedate set to 2025-10-24 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: gps base set to 2025-10-26 (week 2390) Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Listen normally on 3 eth0 10.128.0.9:123 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: Listen normally on 4 lo [::1]:123 Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: bind(21) AF_INET6 [fe80::4001:aff:fe80:9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 6 00:21:58.631309 ntpd[1494]: 6 Nov 00:21:58 ntpd[1494]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:9%2]:123 Nov 6 00:21:58.618826 ntpd[1494]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:21:58.618839 ntpd[1494]: corporation. 
Support and training for ntp-4 are Nov 6 00:21:58.618852 ntpd[1494]: available at https://www.nwtime.org/support Nov 6 00:21:58.618866 ntpd[1494]: ---------------------------------------------------- Nov 6 00:21:58.626047 ntpd[1494]: proto: precision = 0.106 usec (-23) Nov 6 00:21:58.627206 ntpd[1494]: basedate set to 2025-10-24 Nov 6 00:21:58.627233 ntpd[1494]: gps base set to 2025-10-26 (week 2390) Nov 6 00:21:58.627413 ntpd[1494]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:21:58.627457 ntpd[1494]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:21:58.627724 ntpd[1494]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:21:58.627764 ntpd[1494]: Listen normally on 3 eth0 10.128.0.9:123 Nov 6 00:21:58.627805 ntpd[1494]: Listen normally on 4 lo [::1]:123 Nov 6 00:21:58.627848 ntpd[1494]: bind(21) AF_INET6 [fe80::4001:aff:fe80:9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 6 00:21:58.627875 ntpd[1494]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:9%2]:123 Nov 6 00:21:58.643015 kernel: ntpd[1494]: segfault at 24 ip 000055f88580caeb sp 00007ffc60284760 error 4 in ntpd[68aeb,55f8857aa000+80000] likely on CPU 1 (core 0, socket 0) Nov 6 00:21:58.643124 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Nov 6 00:21:58.685436 systemd[1]: Starting sshkeys.service... Nov 6 00:21:58.698473 systemd-coredump[1577]: Process 1494 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 6 00:21:58.712947 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 6 00:21:58.728174 systemd[1]: Started systemd-coredump@0-1577-0.service - Process Core Dump (PID 1577/UID 0). Nov 6 00:21:58.795652 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 00:21:58.809803 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 6 00:21:58.930111 coreos-metadata[1584]: Nov 06 00:21:58.926 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 6 00:21:58.930111 coreos-metadata[1584]: Nov 06 00:21:58.928 INFO Fetch failed with 404: resource not found Nov 6 00:21:58.930111 coreos-metadata[1584]: Nov 06 00:21:58.928 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 6 00:21:58.932115 coreos-metadata[1584]: Nov 06 00:21:58.931 INFO Fetch successful Nov 6 00:21:58.932115 coreos-metadata[1584]: Nov 06 00:21:58.931 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 6 00:21:58.948246 coreos-metadata[1584]: Nov 06 00:21:58.948 INFO Fetch failed with 404: resource not found Nov 6 00:21:58.948451 coreos-metadata[1584]: Nov 06 00:21:58.948 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 6 00:21:58.950225 coreos-metadata[1584]: Nov 06 00:21:58.948 INFO Fetch failed with 404: resource not found Nov 6 00:21:58.950225 coreos-metadata[1584]: Nov 06 00:21:58.948 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 6 00:21:58.951744 coreos-metadata[1584]: Nov 06 00:21:58.951 INFO Fetch successful Nov 6 00:21:58.957035 unknown[1584]: wrote ssh authorized keys file for user: core Nov 6 00:21:59.011838 systemd-networkd[1405]: eth0: Gained IPv6LL Nov 6 00:21:59.021766 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:21:59.032799 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:21:59.045584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:21:59.053928 update-ssh-keys[1588]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:21:59.057048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:21:59.071518 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Nov 6 00:21:59.073171 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 00:21:59.095649 systemd[1]: Finished sshkeys.service. 
Nov 6 00:21:59.107182 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:21:59.136713 init.sh[1594]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 6 00:21:59.136713 init.sh[1594]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 6 00:21:59.136713 init.sh[1594]: + /usr/bin/google_instance_setup Nov 6 00:21:59.161216 containerd[1518]: time="2025-11-06T00:21:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:21:59.166495 containerd[1518]: time="2025-11-06T00:21:59.163560620Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259211730Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.509µs" Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259252575Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259280657Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259523215Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259553963Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259592692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259671323Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:21:59.262612 containerd[1518]: time="2025-11-06T00:21:59.259689016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:21:59.258341 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:21:59.261789 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:21:59.271177 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272320517Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272361216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272386627Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272402470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272561748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272853851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272905173Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272931744Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.272979728Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.273360795Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:21:59.274253 containerd[1518]: time="2025-11-06T00:21:59.273473316Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:21:59.286595 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 6 00:21:59.286829 containerd[1518]: time="2025-11-06T00:21:59.286588727Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.286945235Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287024367Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287049956Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287081509Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287128115Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287151264Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287169801Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287186960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287203984Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287221257Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287242029Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287398301Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287425097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:21:59.289079 containerd[1518]: time="2025-11-06T00:21:59.287451724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287469986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287500866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287531630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287553230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287570892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 
containerd[1518]: time="2025-11-06T00:21:59.287590632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287618651Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287638886Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287745883Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:21:59.289723 containerd[1518]: time="2025-11-06T00:21:59.287768629Z" level=info msg="Start snapshots syncer" Nov 6 00:21:59.292407 containerd[1518]: time="2025-11-06T00:21:59.291247084Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:21:59.292407 containerd[1518]: time="2025-11-06T00:21:59.291635167Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:21:59.292650 containerd[1518]: time="2025-11-06T00:21:59.291709228Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:21:59.292650 containerd[1518]: time="2025-11-06T00:21:59.291832687Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:21:59.292650 containerd[1518]: time="2025-11-06T00:21:59.292011999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:21:59.292650 containerd[1518]: time="2025-11-06T00:21:59.292049795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes 
type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.292079379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301275859Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301309055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301328769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301348183Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301391305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301413240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301434297Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301503190Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301527878Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301544171Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301560372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301573914Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:21:59.302147 containerd[1518]: time="2025-11-06T00:21:59.301589132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301606469Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301632239Z" level=info msg="runtime interface created" Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301642015Z" level=info msg="created NRI interface" Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301657243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301678985Z" level=info msg="Connect containerd service" Nov 6 00:21:59.302813 containerd[1518]: time="2025-11-06T00:21:59.301719612Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:21:59.305641 containerd[1518]: 
time="2025-11-06T00:21:59.303775036Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:21:59.344217 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:21:59.344566 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:21:59.360139 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:21:59.437382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:21:59.454954 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:21:59.466450 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:21:59.475301 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:21:59.577666 systemd-logind[1500]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:21:59.577713 systemd-logind[1500]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 6 00:21:59.577746 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:21:59.582985 systemd-logind[1500]: New seat seat0. Nov 6 00:21:59.589489 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:21:59.778521 containerd[1518]: time="2025-11-06T00:21:59.778461922Z" level=info msg="Start subscribing containerd event" Nov 6 00:21:59.781327 containerd[1518]: time="2025-11-06T00:21:59.781233763Z" level=info msg="Start recovering state" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781538910Z" level=info msg="Start event monitor" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781564509Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781575835Z" level=info msg="Start streaming server" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781597147Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781610379Z" level=info msg="runtime interface starting up..." Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781619896Z" level=info msg="starting plugins..." Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781639139Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.779493272Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:21:59.781912 containerd[1518]: time="2025-11-06T00:21:59.781848430Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:21:59.782233 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:21:59.789303 containerd[1518]: time="2025-11-06T00:21:59.788291042Z" level=info msg="containerd successfully booted in 0.648540s" Nov 6 00:21:59.880257 systemd-coredump[1582]: Process 1494 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1494: #0 0x000055f88580caeb n/a (ntpd + 0x68aeb) #1 0x000055f8857b5cdf n/a (ntpd + 0x11cdf) #2 0x000055f8857b6575 n/a (ntpd + 0x12575) #3 0x000055f8857b1d8a n/a (ntpd + 0xdd8a) #4 0x000055f8857b35d3 n/a (ntpd + 0xf5d3) #5 0x000055f8857bbfd1 n/a (ntpd + 0x17fd1) #6 0x000055f8857acc2d n/a (ntpd + 0x8c2d) #7 0x00007f88be22e16c n/a (libc.so.6 + 0x2716c) #8 0x00007f88be22e229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055f8857acc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Nov 6 00:21:59.883069 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 6 00:21:59.883364 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 6 00:21:59.893253 systemd[1]: systemd-coredump@0-1577-0.service: Deactivated successfully. Nov 6 00:21:59.903544 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 6 00:21:59.906075 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 6 00:21:59.909248 dbus-daemon[1474]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1572 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 6 00:21:59.924617 systemd[1]: Starting polkit.service - Authorization Manager... Nov 6 00:22:00.055589 tar[1516]: linux-amd64/README.md Nov 6 00:22:00.060775 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:00.069709 systemd[1]: Started ntpd.service - Network Time Service. Nov 6 00:22:00.101261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:22:00.135584 ntpd[1653]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:22:00.136168 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:22:00.137885 ntpd[1653]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: ---------------------------------------------------- Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: corporation. Support and training for ntp-4 are Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: available at https://www.nwtime.org/support Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: ---------------------------------------------------- Nov 6 00:22:00.140077 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: proto: precision = 0.113 usec (-23) Nov 6 00:22:00.137916 ntpd[1653]: ---------------------------------------------------- Nov 6 00:22:00.137930 ntpd[1653]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:22:00.137943 ntpd[1653]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:22:00.137956 ntpd[1653]: corporation. 
Support and training for ntp-4 are Nov 6 00:22:00.137969 ntpd[1653]: available at https://www.nwtime.org/support Nov 6 00:22:00.137982 ntpd[1653]: ---------------------------------------------------- Nov 6 00:22:00.138946 ntpd[1653]: proto: precision = 0.113 usec (-23) Nov 6 00:22:00.144352 ntpd[1653]: basedate set to 2025-10-24 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: basedate set to 2025-10-24 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: gps base set to 2025-10-26 (week 2390) Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen normally on 3 eth0 10.128.0.9:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen normally on 4 lo [::1]:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:9%2]:123 Nov 6 00:22:00.145518 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: Listening on routing socket on fd #22 for interface updates Nov 6 00:22:00.144377 ntpd[1653]: gps base set to 2025-10-26 (week 2390) Nov 6 00:22:00.144489 ntpd[1653]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:22:00.144525 ntpd[1653]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:22:00.144760 ntpd[1653]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:22:00.144800 ntpd[1653]: Listen normally on 3 eth0 10.128.0.9:123 Nov 6 00:22:00.144841 ntpd[1653]: Listen normally on 4 lo [::1]:123 Nov 6 00:22:00.144879 ntpd[1653]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:9%2]:123 Nov 6 00:22:00.144917 ntpd[1653]: Listening on routing socket on fd #22 for interface updates Nov 6 00:22:00.153790 ntpd[1653]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:22:00.155890 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:22:00.155890 ntpd[1653]: 6 Nov 00:22:00 ntpd[1653]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:22:00.153831 ntpd[1653]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:22:00.174506 polkitd[1650]: Started polkitd version 126 Nov 6 00:22:00.187286 polkitd[1650]: Loading rules from directory /etc/polkit-1/rules.d Nov 6 00:22:00.190324 polkitd[1650]: Loading rules from directory /run/polkit-1/rules.d Nov 6 00:22:00.190394 polkitd[1650]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:22:00.192711 polkitd[1650]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 6 00:22:00.192902 polkitd[1650]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:22:00.194211 polkitd[1650]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 6 00:22:00.196320 polkitd[1650]: Finished loading, compiling and executing 2 rules Nov 6 00:22:00.196805 systemd[1]: Started polkit.service - Authorization Manager. 
Nov 6 00:22:00.198828 dbus-daemon[1474]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 6 00:22:00.200232 polkitd[1650]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 6 00:22:00.230491 systemd-hostnamed[1572]: Hostname set to (transient) Nov 6 00:22:00.232160 systemd-resolved[1362]: System hostname changed to 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e'. Nov 6 00:22:00.351008 instance-setup[1608]: INFO Running google_set_multiqueue. Nov 6 00:22:00.372606 instance-setup[1608]: INFO Set channels for eth0 to 2. Nov 6 00:22:00.377825 instance-setup[1608]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Nov 6 00:22:00.380212 instance-setup[1608]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 6 00:22:00.380750 instance-setup[1608]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 6 00:22:00.382225 instance-setup[1608]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 6 00:22:00.384014 instance-setup[1608]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 6 00:22:00.384512 instance-setup[1608]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 6 00:22:00.384829 instance-setup[1608]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Nov 6 00:22:00.386702 instance-setup[1608]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 6 00:22:00.396166 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 6 00:22:00.400769 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 6 00:22:00.403073 instance-setup[1608]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 6 00:22:00.403201 instance-setup[1608]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 6 00:22:00.426117 init.sh[1594]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 6 00:22:00.480725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:22:00.493267 systemd[1]: Started sshd@0-10.128.0.9:22-147.75.109.163:59338.service - OpenSSH per-connection server daemon (147.75.109.163:59338). Nov 6 00:22:00.633019 startup-script[1695]: INFO Starting startup scripts. Nov 6 00:22:00.638652 startup-script[1695]: INFO No startup scripts found in metadata. Nov 6 00:22:00.638937 startup-script[1695]: INFO Finished running startup scripts. Nov 6 00:22:00.660624 init.sh[1594]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 6 00:22:00.662106 init.sh[1594]: + daemon_pids=() Nov 6 00:22:00.662106 init.sh[1594]: + for d in accounts clock_skew network Nov 6 00:22:00.662106 init.sh[1594]: + daemon_pids+=($!) Nov 6 00:22:00.662106 init.sh[1594]: + for d in accounts clock_skew network Nov 6 00:22:00.662106 init.sh[1594]: + daemon_pids+=($!) Nov 6 00:22:00.662106 init.sh[1594]: + for d in accounts clock_skew network Nov 6 00:22:00.662106 init.sh[1594]: + daemon_pids+=($!) Nov 6 00:22:00.662106 init.sh[1594]: + NOTIFY_SOCKET=/run/systemd/notify Nov 6 00:22:00.662106 init.sh[1594]: + /usr/bin/systemd-notify --ready Nov 6 00:22:00.662758 init.sh[1702]: + /usr/bin/google_accounts_daemon Nov 6 00:22:00.663466 init.sh[1703]: + /usr/bin/google_clock_skew_daemon Nov 6 00:22:00.663794 init.sh[1704]: + /usr/bin/google_network_daemon Nov 6 00:22:00.676809 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
Nov 6 00:22:00.691281 init.sh[1594]: + wait -n 1702 1703 1704 Nov 6 00:22:00.912804 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 59338 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:00.917185 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:00.935223 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:22:00.946819 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:22:00.982185 systemd-logind[1500]: New session 1 of user core. Nov 6 00:22:01.006994 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:22:01.025840 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:22:01.075230 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:22:01.082192 systemd-logind[1500]: New session c1 of user core. Nov 6 00:22:01.174364 google-clock-skew[1703]: INFO Starting Google Clock Skew daemon. Nov 6 00:22:01.198900 google-clock-skew[1703]: INFO Clock drift token has changed: 0. Nov 6 00:22:01.228704 google-networking[1704]: INFO Starting Google Networking daemon. Nov 6 00:22:01.309677 groupadd[1722]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 6 00:22:01.316936 groupadd[1722]: group added to /etc/gshadow: name=google-sudoers Nov 6 00:22:01.399217 groupadd[1722]: new group: name=google-sudoers, GID=1000 Nov 6 00:22:01.434817 systemd[1714]: Queued start job for default target default.target. Nov 6 00:22:01.437265 systemd[1714]: Created slice app.slice - User Application Slice. Nov 6 00:22:01.437311 systemd[1714]: Reached target paths.target - Paths. Nov 6 00:22:01.437870 systemd[1714]: Reached target timers.target - Timers. Nov 6 00:22:01.444192 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:22:01.448603 google-accounts[1702]: INFO Starting Google Accounts daemon. Nov 6 00:22:01.463347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:01.467652 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:22:01.469490 systemd[1714]: Reached target sockets.target - Sockets. Nov 6 00:22:01.469581 systemd[1714]: Reached target basic.target - Basic System. Nov 6 00:22:01.469655 systemd[1714]: Reached target default.target - Main User Target. Nov 6 00:22:01.469710 systemd[1714]: Startup finished in 367ms. Nov 6 00:22:01.472509 google-accounts[1702]: WARNING OS Login not installed. Nov 6 00:22:01.474021 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:22:01.475446 google-accounts[1702]: INFO Creating a new user account for 0. Nov 6 00:22:01.479699 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:01.480429 init.sh[1737]: useradd: invalid user name '0': use --badname to ignore Nov 6 00:22:01.481595 google-accounts[1702]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 6 00:22:01.495342 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:22:01.504470 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:22:01.513472 systemd[1]: Startup finished in 3.750s (kernel) + 7.686s (initrd) + 8.097s (userspace) = 19.535s. 
Nov 6 00:22:02.001371 google-clock-skew[1703]: INFO Synced system time with hardware clock. Nov 6 00:22:02.001693 systemd-resolved[1362]: Clock change detected. Flushing caches. Nov 6 00:22:02.219852 systemd[1]: Started sshd@1-10.128.0.9:22-147.75.109.163:59342.service - OpenSSH per-connection server daemon (147.75.109.163:59342). Nov 6 00:22:02.535640 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 59342 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:02.537945 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:02.548854 systemd-logind[1500]: New session 2 of user core. Nov 6 00:22:02.551981 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:22:02.753330 sshd[1755]: Connection closed by 147.75.109.163 port 59342 Nov 6 00:22:02.755052 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:02.761491 systemd[1]: sshd@1-10.128.0.9:22-147.75.109.163:59342.service: Deactivated successfully. Nov 6 00:22:02.764788 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:22:02.768281 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:22:02.770314 systemd-logind[1500]: Removed session 2. Nov 6 00:22:02.809350 systemd[1]: Started sshd@2-10.128.0.9:22-147.75.109.163:59350.service - OpenSSH per-connection server daemon (147.75.109.163:59350). Nov 6 00:22:02.890340 kubelet[1735]: E1106 00:22:02.890275 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:02.893108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:02.893367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:02.893921 systemd[1]: kubelet.service: Consumed 1.333s CPU time, 269.9M memory peak. Nov 6 00:22:03.129826 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 59350 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:03.131173 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:03.138503 systemd-logind[1500]: New session 3 of user core. Nov 6 00:22:03.145992 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:22:03.345538 sshd[1765]: Connection closed by 147.75.109.163 port 59350 Nov 6 00:22:03.346394 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:03.352143 systemd[1]: sshd@2-10.128.0.9:22-147.75.109.163:59350.service: Deactivated successfully. Nov 6 00:22:03.354360 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:22:03.355520 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:22:03.357508 systemd-logind[1500]: Removed session 3. Nov 6 00:22:03.404045 systemd[1]: Started sshd@3-10.128.0.9:22-147.75.109.163:59354.service - OpenSSH per-connection server daemon (147.75.109.163:59354). 
Nov 6 00:22:03.712025 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 59354 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:03.713675 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:03.720826 systemd-logind[1500]: New session 4 of user core. Nov 6 00:22:03.727974 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:22:03.929044 sshd[1774]: Connection closed by 147.75.109.163 port 59354 Nov 6 00:22:03.929881 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:03.935456 systemd[1]: sshd@3-10.128.0.9:22-147.75.109.163:59354.service: Deactivated successfully. Nov 6 00:22:03.937734 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:22:03.939060 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:22:03.941111 systemd-logind[1500]: Removed session 4. Nov 6 00:22:03.979923 systemd[1]: Started sshd@4-10.128.0.9:22-147.75.109.163:59362.service - OpenSSH per-connection server daemon (147.75.109.163:59362). Nov 6 00:22:04.286451 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 59362 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:04.288107 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:04.294831 systemd-logind[1500]: New session 5 of user core. Nov 6 00:22:04.300985 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:22:04.479261 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:22:04.479742 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:04.496037 sudo[1784]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:04.538446 sshd[1783]: Connection closed by 147.75.109.163 port 59362 Nov 6 00:22:04.539962 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:04.546102 systemd[1]: sshd@4-10.128.0.9:22-147.75.109.163:59362.service: Deactivated successfully. Nov 6 00:22:04.548376 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:22:04.549551 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:22:04.551612 systemd-logind[1500]: Removed session 5. Nov 6 00:22:04.592123 systemd[1]: Started sshd@5-10.128.0.9:22-147.75.109.163:59376.service - OpenSSH per-connection server daemon (147.75.109.163:59376). Nov 6 00:22:04.906455 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 59376 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:04.908347 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:04.915528 systemd-logind[1500]: New session 6 of user core. Nov 6 00:22:04.921990 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 00:22:05.088223 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:22:05.088698 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:05.128220 sudo[1795]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:05.143136 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:22:05.143612 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:05.157410 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:05.221255 augenrules[1817]: No rules Nov 6 00:22:05.223037 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:05.223481 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:05.225607 sudo[1794]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:05.268743 sshd[1793]: Connection closed by 147.75.109.163 port 59376 Nov 6 00:22:05.269543 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:05.275871 systemd[1]: sshd@5-10.128.0.9:22-147.75.109.163:59376.service: Deactivated successfully. Nov 6 00:22:05.278182 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:22:05.279735 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:22:05.281467 systemd-logind[1500]: Removed session 6. Nov 6 00:22:05.322891 systemd[1]: Started sshd@6-10.128.0.9:22-147.75.109.163:59386.service - OpenSSH per-connection server daemon (147.75.109.163:59386). Nov 6 00:22:05.628706 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 59386 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:22:05.630388 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:05.636301 systemd-logind[1500]: New session 7 of user core. Nov 6 00:22:05.646015 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:22:05.810209 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:22:05.810688 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:06.302011 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:22:06.321371 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:22:06.680390 dockerd[1848]: time="2025-11-06T00:22:06.679985122Z" level=info msg="Starting up" Nov 6 00:22:06.683686 dockerd[1848]: time="2025-11-06T00:22:06.683538971Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:22:06.698688 dockerd[1848]: time="2025-11-06T00:22:06.698605693Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:22:06.721086 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1782559806-merged.mount: Deactivated successfully. Nov 6 00:22:06.832462 dockerd[1848]: time="2025-11-06T00:22:06.832215783Z" level=info msg="Loading containers: start." Nov 6 00:22:06.850079 kernel: Initializing XFRM netlink socket Nov 6 00:22:07.182410 systemd-networkd[1405]: docker0: Link UP Nov 6 00:22:07.188021 dockerd[1848]: time="2025-11-06T00:22:07.187948341Z" level=info msg="Loading containers: done." 
Nov 6 00:22:07.206591 dockerd[1848]: time="2025-11-06T00:22:07.205731324Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:22:07.206828 dockerd[1848]: time="2025-11-06T00:22:07.206678718Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:22:07.206896 dockerd[1848]: time="2025-11-06T00:22:07.206876253Z" level=info msg="Initializing buildkit" Nov 6 00:22:07.236926 dockerd[1848]: time="2025-11-06T00:22:07.236880366Z" level=info msg="Completed buildkit initialization" Nov 6 00:22:07.248654 dockerd[1848]: time="2025-11-06T00:22:07.248611088Z" level=info msg="Daemon has completed initialization" Nov 6 00:22:07.248835 dockerd[1848]: time="2025-11-06T00:22:07.248669891Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:22:07.249151 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:22:08.235767 containerd[1518]: time="2025-11-06T00:22:08.235715006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:22:08.747498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806954270.mount: Deactivated successfully. Nov 6 00:22:10.428462 containerd[1518]: time="2025-11-06T00:22:10.428390250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:10.429803 containerd[1518]: time="2025-11-06T00:22:10.429653944Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30122476" Nov 6 00:22:10.431216 containerd[1518]: time="2025-11-06T00:22:10.431153800Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:10.434490 containerd[1518]: time="2025-11-06T00:22:10.434431415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:10.435798 containerd[1518]: time="2025-11-06T00:22:10.435667050Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.199882431s" Nov 6 00:22:10.435798 containerd[1518]: time="2025-11-06T00:22:10.435714428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:22:10.438724 containerd[1518]: time="2025-11-06T00:22:10.438348618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:22:12.024103 containerd[1518]: time="2025-11-06T00:22:12.024039005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:12.025725 containerd[1518]: time="2025-11-06T00:22:12.025415885Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active 
requests=0, bytes read=26022778" Nov 6 00:22:12.026988 containerd[1518]: time="2025-11-06T00:22:12.026944525Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:12.030658 containerd[1518]: time="2025-11-06T00:22:12.030620389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:12.031958 containerd[1518]: time="2025-11-06T00:22:12.031920166Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.593526861s" Nov 6 00:22:12.032120 containerd[1518]: time="2025-11-06T00:22:12.032093569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:22:12.033270 containerd[1518]: time="2025-11-06T00:22:12.033235646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:22:13.027807 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:13.031590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:13.349977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:13.365968 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:13.442708 containerd[1518]: time="2025-11-06T00:22:13.442618189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:13.445792 containerd[1518]: time="2025-11-06T00:22:13.444843903Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20157484" Nov 6 00:22:13.446440 containerd[1518]: time="2025-11-06T00:22:13.446400253Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:13.453475 containerd[1518]: time="2025-11-06T00:22:13.453430353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:13.453598 kubelet[2132]: E1106 00:22:13.453525 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:13.455155 containerd[1518]: time="2025-11-06T00:22:13.455110882Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.42183457s" Nov 6 00:22:13.455321 containerd[1518]: time="2025-11-06T00:22:13.455159769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 00:22:13.456712 containerd[1518]: time="2025-11-06T00:22:13.456683729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 00:22:13.461271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:13.461512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:13.462144 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110M memory peak. Nov 6 00:22:14.669147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863553394.mount: Deactivated successfully. Nov 6 00:22:15.404215 containerd[1518]: time="2025-11-06T00:22:15.404141248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:15.405724 containerd[1518]: time="2025-11-06T00:22:15.405497547Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31931364" Nov 6 00:22:15.406965 containerd[1518]: time="2025-11-06T00:22:15.406921444Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:15.409723 containerd[1518]: time="2025-11-06T00:22:15.409679111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:15.410904 containerd[1518]: time="2025-11-06T00:22:15.410613100Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.953757125s" Nov 6 00:22:15.410904 containerd[1518]: time="2025-11-06T00:22:15.410660155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 00:22:15.411322 containerd[1518]: time="2025-11-06T00:22:15.411292292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 00:22:15.842060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450034415.mount: Deactivated successfully. 
Nov 6 00:22:17.076182 containerd[1518]: time="2025-11-06T00:22:17.076112148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:17.077647 containerd[1518]: time="2025-11-06T00:22:17.077599912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Nov 6 00:22:17.080182 containerd[1518]: time="2025-11-06T00:22:17.080116814Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:17.083219 containerd[1518]: time="2025-11-06T00:22:17.083147379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:17.085146 containerd[1518]: time="2025-11-06T00:22:17.084604177Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.673163156s" Nov 6 00:22:17.085146 containerd[1518]: time="2025-11-06T00:22:17.084646602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 00:22:17.085419 containerd[1518]: time="2025-11-06T00:22:17.085390910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:22:17.578963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925115812.mount: Deactivated successfully. 
Nov 6 00:22:17.587449 containerd[1518]: time="2025-11-06T00:22:17.587380506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:17.588618 containerd[1518]: time="2025-11-06T00:22:17.588563129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Nov 6 00:22:17.589923 containerd[1518]: time="2025-11-06T00:22:17.589878243Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:17.593874 containerd[1518]: time="2025-11-06T00:22:17.592827887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:17.593874 containerd[1518]: time="2025-11-06T00:22:17.593666845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.21274ms" Nov 6 00:22:17.593874 containerd[1518]: time="2025-11-06T00:22:17.593704166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:22:17.594590 containerd[1518]: time="2025-11-06T00:22:17.594553430Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 00:22:18.025240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692666910.mount: Deactivated successfully. 
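The pulls above (kube-controller-manager, kube-scheduler, kube-proxy, CoreDNS, pause, and etcd next) all go through containerd's CRI image service, so they can be listed or repeated from the node with crictl. The socket path below is the conventional containerd endpoint and is an assumption about this host, not something taken from the log:

    # list images the CRI runtime already has
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # re-pull one of them manually, e.g. the pause image pulled above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10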
Nov 6 00:22:20.413230 containerd[1518]: time="2025-11-06T00:22:20.413159994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:20.414804 containerd[1518]: time="2025-11-06T00:22:20.414633557Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58384071" Nov 6 00:22:20.416040 containerd[1518]: time="2025-11-06T00:22:20.415975229Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:20.419499 containerd[1518]: time="2025-11-06T00:22:20.419436880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:20.421192 containerd[1518]: time="2025-11-06T00:22:20.420836151Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.826242635s" Nov 6 00:22:20.421192 containerd[1518]: time="2025-11-06T00:22:20.420876823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 00:22:23.527808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:22:23.533015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:23.912971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:23.926310 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:23.993649 kubelet[2287]: E1106 00:22:23.993589 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:23.997697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:24.000011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:24.000509 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.3M memory peak. Nov 6 00:22:25.048597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:25.048915 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.3M memory peak. Nov 6 00:22:25.052044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:25.094061 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-7.scope)... Nov 6 00:22:25.094093 systemd[1]: Reloading... Nov 6 00:22:25.279797 zram_generator::config[2346]: No configuration found. Nov 6 00:22:25.583435 systemd[1]: Reloading finished in 488 ms. Nov 6 00:22:25.676854 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:22:25.677043 systemd[1]: kubelet.service: Failed with result 'signal'. 
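The earlier "Referenced but unset environment variable" notices (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) refer to variables that a kubelet drop-in expands on its ExecStart line; the "Reload requested ... Reloading ..." pair at 00:22:25 is a daemon-reload that evidently picks up freshly written unit configuration before the next start, which succeeds. A sketch of the kind of drop-in involved, with paths that are illustrative rather than read from this machine:

    # e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (path and binary location illustrative)
    [Service]
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # provisioning writes KUBELET_KUBEADM_ARGS here
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # optional admin overrides become KUBELET_EXTRA_ARGS
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS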
Nov 6 00:22:25.677516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:25.677589 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.2M memory peak. Nov 6 00:22:25.679690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:25.973883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:25.987343 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:22:26.049188 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:26.049188 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:22:26.049188 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:26.049788 kubelet[2397]: I1106 00:22:26.049245 2397 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:22:26.786939 kubelet[2397]: I1106 00:22:26.786874 2397 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:22:26.786939 kubelet[2397]: I1106 00:22:26.786913 2397 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:22:26.787369 kubelet[2397]: I1106 00:22:26.787323 2397 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:22:26.836149 kubelet[2397]: I1106 00:22:26.835832 2397 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:22:26.836149 kubelet[2397]: E1106 00:22:26.835947 2397 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:22:26.845840 kubelet[2397]: I1106 00:22:26.845733 2397 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:22:26.851394 kubelet[2397]: I1106 00:22:26.851359 2397 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:22:26.851713 kubelet[2397]: I1106 00:22:26.851661 2397 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:22:26.851977 kubelet[2397]: I1106 00:22:26.851695 2397 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:22:26.851977 kubelet[2397]: I1106 00:22:26.851971 2397 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:22:26.852219 kubelet[2397]: I1106 00:22:26.851990 2397 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:22:26.852219 kubelet[2397]: I1106 00:22:26.852151 2397 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:26.857943 kubelet[2397]: I1106 00:22:26.857301 2397 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:22:26.857943 kubelet[2397]: I1106 00:22:26.857343 2397 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:22:26.860660 kubelet[2397]: I1106 00:22:26.860351 2397 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:22:26.860660 kubelet[2397]: I1106 00:22:26.860394 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:22:26.863527 kubelet[2397]: E1106 00:22:26.863493 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:22:26.866989 kubelet[2397]: E1106 00:22:26.866944 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:22:26.867849 kubelet[2397]: I1106 00:22:26.867819 2397 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:22:26.868609 kubelet[2397]: I1106 00:22:26.868508 2397 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:22:26.869802 kubelet[2397]: W1106 00:22:26.869774 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:22:26.890475 kubelet[2397]: I1106 00:22:26.890106 2397 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:22:26.890475 kubelet[2397]: I1106 00:22:26.890186 2397 server.go:1289] "Started kubelet" Nov 6 00:22:26.891534 kubelet[2397]: I1106 00:22:26.891422 2397 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:22:26.892775 kubelet[2397]: I1106 00:22:26.892705 2397 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:22:26.900986 kubelet[2397]: I1106 00:22:26.900915 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:22:26.901450 kubelet[2397]: I1106 00:22:26.901426 2397 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:22:26.903455 kubelet[2397]: I1106 00:22:26.903398 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:22:26.905337 kubelet[2397]: E1106 00:22:26.902604 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e.1875430bbd9a94da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,UID:ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,},FirstTimestamp:2025-11-06 00:22:26.890142938 +0000 UTC m=+0.896579033,LastTimestamp:2025-11-06 00:22:26.890142938 +0000 UTC m=+0.896579033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,}" Nov 6 00:22:26.906707 kubelet[2397]: I1106 00:22:26.906683 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:22:26.910899 kubelet[2397]: E1106 00:22:26.910873 2397 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:22:26.912456 kubelet[2397]: E1106 00:22:26.911377 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" Nov 6 00:22:26.912456 kubelet[2397]: I1106 00:22:26.911421 2397 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:22:26.912456 kubelet[2397]: I1106 00:22:26.911701 2397 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:22:26.912456 kubelet[2397]: I1106 00:22:26.911790 2397 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:22:26.912956 kubelet[2397]: E1106 00:22:26.912924 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:22:26.913307 kubelet[2397]: I1106 00:22:26.913283 2397 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:22:26.913634 kubelet[2397]: I1106 00:22:26.913609 2397 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:22:26.914779 kubelet[2397]: E1106 00:22:26.914719 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="200ms" Nov 6 00:22:26.915852 kubelet[2397]: I1106 00:22:26.915826 2397 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:22:26.946081 kubelet[2397]: I1106 00:22:26.946016 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:22:26.948136 kubelet[2397]: I1106 00:22:26.948095 2397 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:22:26.948136 kubelet[2397]: I1106 00:22:26.948121 2397 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:22:26.948324 kubelet[2397]: I1106 00:22:26.948145 2397 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:26.950959 kubelet[2397]: I1106 00:22:26.950804 2397 policy_none.go:49] "None policy: Start" Nov 6 00:22:26.950959 kubelet[2397]: I1106 00:22:26.950833 2397 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:22:26.950959 kubelet[2397]: I1106 00:22:26.950875 2397 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:22:26.952533 kubelet[2397]: I1106 00:22:26.952472 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:22:26.952533 kubelet[2397]: I1106 00:22:26.952498 2397 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:22:26.952533 kubelet[2397]: I1106 00:22:26.952525 2397 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
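Every "dial tcp 10.128.0.9:6443: connect: connection refused" in this stretch is expected: the kubelet is trying to reach the API server it has not started yet, so the CSR post, the Node/Service/CSIDriver list-watches, the event write, and the lease controller all fail and retry with backoff. Once the kube-apiserver static pod created below is serving, a plain probe succeeds; the command is generic and not taken from this host's tooling:

    # connection refused until the static kube-apiserver is up; afterwards expect "ok" or an auth error
    curl -k https://10.128.0.9:6443/healthz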
Nov 6 00:22:26.952533 kubelet[2397]: I1106 00:22:26.952536 2397 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:22:26.953484 kubelet[2397]: E1106 00:22:26.952597 2397 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:22:26.954039 kubelet[2397]: E1106 00:22:26.953651 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:22:26.964719 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:22:26.975358 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:22:26.988503 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:22:26.990921 kubelet[2397]: E1106 00:22:26.990889 2397 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:22:26.991824 kubelet[2397]: I1106 00:22:26.991159 2397 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:22:26.991824 kubelet[2397]: I1106 00:22:26.991179 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:22:26.991824 kubelet[2397]: I1106 00:22:26.991454 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:22:26.993977 kubelet[2397]: E1106 00:22:26.993922 2397 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:22:26.994250 kubelet[2397]: E1106 00:22:26.994221 2397 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" Nov 6 00:22:27.074933 systemd[1]: Created slice kubepods-burstable-pod68dab48e38e2b2078cd5aee421afdb67.slice - libcontainer container kubepods-burstable-pod68dab48e38e2b2078cd5aee421afdb67.slice. Nov 6 00:22:27.087709 kubelet[2397]: E1106 00:22:27.087631 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.094021 systemd[1]: Created slice kubepods-burstable-pod1b4d57ffbe38d31533367d5ee20db4da.slice - libcontainer container kubepods-burstable-pod1b4d57ffbe38d31533367d5ee20db4da.slice. 
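The eviction manager whose control loop starts above enforces the HardEvictionThresholds embedded in the container-manager NodeConfig logged at 00:22:26.851. Those same values, written out in KubeletConfiguration form purely for readability, are the kubelet's stock hard-eviction defaults:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"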
Nov 6 00:22:27.096247 kubelet[2397]: I1106 00:22:27.096201 2397 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.096677 kubelet[2397]: E1106 00:22:27.096642 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.106630 kubelet[2397]: E1106 00:22:27.106384 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.110345 systemd[1]: Created slice kubepods-burstable-pod8ec2c8fa39bdcb257b9e1d88596349d6.slice - libcontainer container kubepods-burstable-pod8ec2c8fa39bdcb257b9e1d88596349d6.slice. Nov 6 00:22:27.112428 kubelet[2397]: I1106 00:22:27.112390 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.113333 kubelet[2397]: E1106 00:22:27.113284 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.115727 kubelet[2397]: E1106 00:22:27.115688 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="400ms" Nov 6 00:22:27.213230 kubelet[2397]: I1106 00:22:27.213144 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213577 kubelet[2397]: I1106 00:22:27.213240 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213577 kubelet[2397]: I1106 00:22:27.213286 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " 
pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213577 kubelet[2397]: I1106 00:22:27.213349 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213577 kubelet[2397]: I1106 00:22:27.213377 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213722 kubelet[2397]: I1106 00:22:27.213403 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213722 kubelet[2397]: I1106 00:22:27.213432 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.213722 kubelet[2397]: I1106 00:22:27.213459 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ec2c8fa39bdcb257b9e1d88596349d6-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"8ec2c8fa39bdcb257b9e1d88596349d6\") " pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.302128 kubelet[2397]: I1106 00:22:27.302083 2397 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.302567 kubelet[2397]: E1106 00:22:27.302511 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.389982 containerd[1518]: time="2025-11-06T00:22:27.389826334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:68dab48e38e2b2078cd5aee421afdb67,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:27.407941 containerd[1518]: time="2025-11-06T00:22:27.407692200Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:1b4d57ffbe38d31533367d5ee20db4da,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:27.427299 containerd[1518]: time="2025-11-06T00:22:27.427196894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:8ec2c8fa39bdcb257b9e1d88596349d6,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:27.430159 containerd[1518]: time="2025-11-06T00:22:27.430117067Z" level=info msg="connecting to shim ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8" address="unix:///run/containerd/s/ce2b4944cfe64b973fad78fdaa5994825eda663fe1dc27e0753cbecfaf91ccb9" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:27.471585 containerd[1518]: time="2025-11-06T00:22:27.470913020Z" level=info msg="connecting to shim cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224" address="unix:///run/containerd/s/be0b83463aab95d6feb80b1f4d856da93e86c5b42829b0cf798e994ba4ca2396" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:27.491281 containerd[1518]: time="2025-11-06T00:22:27.491226143Z" level=info msg="connecting to shim 6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18" address="unix:///run/containerd/s/f03c69ba8b472661b45c75f151b2f33729ec6074c96c5df990ab2dc5cd31135b" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:27.505209 systemd[1]: Started cri-containerd-ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8.scope - libcontainer container ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8. Nov 6 00:22:27.519260 kubelet[2397]: E1106 00:22:27.519175 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="800ms" Nov 6 00:22:27.561514 systemd[1]: Started cri-containerd-cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224.scope - libcontainer container cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224. Nov 6 00:22:27.569999 systemd[1]: Started cri-containerd-6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18.scope - libcontainer container 6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18. 
Nov 6 00:22:27.637731 containerd[1518]: time="2025-11-06T00:22:27.637609615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:68dab48e38e2b2078cd5aee421afdb67,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8\"" Nov 6 00:22:27.643420 kubelet[2397]: E1106 00:22:27.643285 2397 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956" Nov 6 00:22:27.652163 containerd[1518]: time="2025-11-06T00:22:27.652107261Z" level=info msg="CreateContainer within sandbox \"ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:22:27.666796 containerd[1518]: time="2025-11-06T00:22:27.666344148Z" level=info msg="Container f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:27.690272 containerd[1518]: time="2025-11-06T00:22:27.690215175Z" level=info msg="CreateContainer within sandbox \"ec36f0f734d67cdd1805ab34ab84734e754b61e5313cdd9996181bb6b193b9b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe\"" Nov 6 00:22:27.691604 containerd[1518]: time="2025-11-06T00:22:27.691568014Z" level=info msg="StartContainer for \"f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe\"" Nov 6 00:22:27.693393 containerd[1518]: time="2025-11-06T00:22:27.693357381Z" level=info msg="connecting to shim f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe" address="unix:///run/containerd/s/ce2b4944cfe64b973fad78fdaa5994825eda663fe1dc27e0753cbecfaf91ccb9" protocol=ttrpc version=3 Nov 6 00:22:27.700497 containerd[1518]: time="2025-11-06T00:22:27.700386916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:8ec2c8fa39bdcb257b9e1d88596349d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18\"" Nov 6 00:22:27.704605 kubelet[2397]: E1106 00:22:27.704213 2397 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956" Nov 6 00:22:27.708822 containerd[1518]: time="2025-11-06T00:22:27.708789674Z" level=info msg="CreateContainer within sandbox \"6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:22:27.709802 kubelet[2397]: I1106 00:22:27.709488 2397 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.710310 kubelet[2397]: E1106 00:22:27.710162 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.727069 containerd[1518]: time="2025-11-06T00:22:27.727036014Z" level=info msg="Container 
5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:27.728285 systemd[1]: Started cri-containerd-f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe.scope - libcontainer container f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe. Nov 6 00:22:27.734283 containerd[1518]: time="2025-11-06T00:22:27.734246734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e,Uid:1b4d57ffbe38d31533367d5ee20db4da,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224\"" Nov 6 00:22:27.737544 kubelet[2397]: E1106 00:22:27.737489 2397 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbb" Nov 6 00:22:27.739540 containerd[1518]: time="2025-11-06T00:22:27.739228779Z" level=info msg="CreateContainer within sandbox \"6e2c1947c493c8437cb3e9136089ff397737d3d8afc96292608cb4270cf81f18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320\"" Nov 6 00:22:27.741658 containerd[1518]: time="2025-11-06T00:22:27.741605789Z" level=info msg="StartContainer for \"5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320\"" Nov 6 00:22:27.745472 containerd[1518]: time="2025-11-06T00:22:27.745422748Z" level=info msg="connecting to shim 5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320" address="unix:///run/containerd/s/f03c69ba8b472661b45c75f151b2f33729ec6074c96c5df990ab2dc5cd31135b" protocol=ttrpc version=3 Nov 6 00:22:27.746628 containerd[1518]: time="2025-11-06T00:22:27.746587812Z" level=info msg="CreateContainer within sandbox \"cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:22:27.759787 containerd[1518]: time="2025-11-06T00:22:27.758998670Z" level=info msg="Container a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:27.776361 containerd[1518]: time="2025-11-06T00:22:27.776319821Z" level=info msg="CreateContainer within sandbox \"cb9178e6502047860245613f07fc4c89663f5d91d556837e8eedc2372d097224\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59\"" Nov 6 00:22:27.778559 containerd[1518]: time="2025-11-06T00:22:27.778516495Z" level=info msg="StartContainer for \"a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59\"" Nov 6 00:22:27.783659 containerd[1518]: time="2025-11-06T00:22:27.783616345Z" level=info msg="connecting to shim a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59" address="unix:///run/containerd/s/be0b83463aab95d6feb80b1f4d856da93e86c5b42829b0cf798e994ba4ca2396" protocol=ttrpc version=3 Nov 6 00:22:27.785227 systemd[1]: Started cri-containerd-5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320.scope - libcontainer container 5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320. 
Nov 6 00:22:27.823228 systemd[1]: Started cri-containerd-a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59.scope - libcontainer container a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59. Nov 6 00:22:27.872782 containerd[1518]: time="2025-11-06T00:22:27.872702300Z" level=info msg="StartContainer for \"f6e9b6a67963d27c31a960f3e71354cb55a5ad034e2fa478b6e880c5619afabe\" returns successfully" Nov 6 00:22:27.936768 containerd[1518]: time="2025-11-06T00:22:27.936259918Z" level=info msg="StartContainer for \"a571d4a29db0b3cd6a8de8f0f7a13a83754b67f46b0a992ac7ebae25a2d03f59\" returns successfully" Nov 6 00:22:27.993111 kubelet[2397]: E1106 00:22:27.993064 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:27.997797 kubelet[2397]: E1106 00:22:27.996560 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:28.004583 containerd[1518]: time="2025-11-06T00:22:28.004535198Z" level=info msg="StartContainer for \"5b19b5812a11bbca1232c0332d1702933c9df4eb2f73f8f41d0c83eec0e6f320\" returns successfully" Nov 6 00:22:28.517209 kubelet[2397]: I1106 00:22:28.517172 2397 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:28.996751 kubelet[2397]: E1106 00:22:28.996647 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:28.997434 kubelet[2397]: E1106 00:22:28.997406 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:30.000857 kubelet[2397]: E1106 00:22:29.999548 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:30.742298 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
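All three control-plane containers now report "StartContainer ... returns successfully", while mirror-pod creation is still skipped because the node object cannot be fetched from the apiserver that is only just coming up inside f6e9b6a6... . The sandboxes and containers can be inspected directly through the CRI while the API is still unavailable (socket path assumed, as above):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a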
Nov 6 00:22:30.995797 kubelet[2397]: E1106 00:22:30.995637 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.003553 kubelet[2397]: E1106 00:22:31.003497 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.016128 kubelet[2397]: E1106 00:22:31.016076 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" not found" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.064976 kubelet[2397]: I1106 00:22:31.064928 2397 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.115282 kubelet[2397]: I1106 00:22:31.115241 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.129365 kubelet[2397]: E1106 00:22:31.129248 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.129365 kubelet[2397]: I1106 00:22:31.129314 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.133643 kubelet[2397]: E1106 00:22:31.133394 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.133643 kubelet[2397]: I1106 00:22:31.133430 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.137212 kubelet[2397]: E1106 00:22:31.137175 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:31.866794 kubelet[2397]: I1106 00:22:31.866647 2397 apiserver.go:52] "Watching apiserver" Nov 6 00:22:31.912325 kubelet[2397]: I1106 00:22:31.912271 2397 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:22:32.970203 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-7.scope)... Nov 6 00:22:32.970226 systemd[1]: Reloading... Nov 6 00:22:33.136814 kubelet[2397]: I1106 00:22:33.136518 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:33.146827 zram_generator::config[2724]: No configuration found. 
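The "no PriorityClass with name system-node-critical was found" failures are transient: mirror pods for the static control-plane pods reference that built-in class, which the just-started apiserver creates only moments later, and the kubelet keeps retrying (by 00:22:34 the apiserver mirror pod "already exists"). Node registration has also gone through ("Successfully registered node"). Both outcomes can be confirmed from any admin kubeconfig once the control plane answers; these commands are generic, not taken from the log:

    kubectl get priorityclass system-node-critical
    kubectl get node ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e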
Nov 6 00:22:33.148876 kubelet[2397]: I1106 00:22:33.148463 2397 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 00:22:33.465507 systemd[1]: Reloading finished in 494 ms. Nov 6 00:22:33.505250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:33.517455 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:22:33.517973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:33.518067 systemd[1]: kubelet.service: Consumed 1.394s CPU time, 130.8M memory peak. Nov 6 00:22:33.520650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:33.859989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:33.875462 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:22:33.945245 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:33.945245 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:22:33.945245 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:33.945851 kubelet[2773]: I1106 00:22:33.945331 2773 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:22:33.959295 kubelet[2773]: I1106 00:22:33.959238 2773 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:22:33.959295 kubelet[2773]: I1106 00:22:33.959269 2773 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:22:33.959613 kubelet[2773]: I1106 00:22:33.959587 2773 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:22:33.962493 kubelet[2773]: I1106 00:22:33.961860 2773 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:22:33.965554 kubelet[2773]: I1106 00:22:33.965343 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:22:33.974791 kubelet[2773]: I1106 00:22:33.974706 2773 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:22:33.979341 kubelet[2773]: I1106 00:22:33.979258 2773 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:22:33.979792 kubelet[2773]: I1106 00:22:33.979657 2773 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:22:33.980353 kubelet[2773]: I1106 00:22:33.979693 2773 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:22:33.980353 kubelet[2773]: I1106 00:22:33.979966 2773 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:22:33.980353 kubelet[2773]: I1106 00:22:33.979984 2773 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:22:33.982320 kubelet[2773]: I1106 00:22:33.980399 2773 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:33.982875 kubelet[2773]: I1106 00:22:33.982844 2773 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:22:33.982875 kubelet[2773]: I1106 00:22:33.982875 2773 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:22:33.983029 kubelet[2773]: I1106 00:22:33.982915 2773 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:22:33.983029 kubelet[2773]: I1106 00:22:33.982941 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:22:33.997469 kubelet[2773]: I1106 00:22:33.996490 2773 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:22:34.000142 kubelet[2773]: I1106 00:22:34.000030 2773 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:22:34.037696 kubelet[2773]: I1106 00:22:34.037657 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:22:34.037926 kubelet[2773]: I1106 00:22:34.037729 2773 server.go:1289] "Started kubelet" Nov 6 00:22:34.044411 kubelet[2773]: I1106 00:22:34.044012 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:22:34.049889 
kubelet[2773]: I1106 00:22:34.048890 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:22:34.050493 kubelet[2773]: I1106 00:22:34.050448 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:22:34.058170 kubelet[2773]: I1106 00:22:34.050886 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:22:34.061662 kubelet[2773]: I1106 00:22:34.061487 2773 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:22:34.061662 kubelet[2773]: I1106 00:22:34.051643 2773 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:22:34.061662 kubelet[2773]: I1106 00:22:34.060887 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:22:34.062773 kubelet[2773]: I1106 00:22:34.051371 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:22:34.062773 kubelet[2773]: I1106 00:22:34.061238 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:22:34.062773 kubelet[2773]: I1106 00:22:34.062440 2773 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:22:34.071126 kubelet[2773]: E1106 00:22:34.071073 2773 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:22:34.075377 kubelet[2773]: I1106 00:22:34.075348 2773 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:22:34.075377 kubelet[2773]: I1106 00:22:34.075370 2773 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:22:34.112936 kubelet[2773]: I1106 00:22:34.112736 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:22:34.121988 kubelet[2773]: I1106 00:22:34.121448 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:22:34.121988 kubelet[2773]: I1106 00:22:34.121856 2773 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:22:34.122929 kubelet[2773]: I1106 00:22:34.122897 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
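Unlike the first instance, this kubelet finds an existing client certificate ("Loading cert/key pair from ... kubelet-client-current.pem"), so the background bootstrap that was refused at 00:22:26 evidently completed once the apiserver came up, and "Client rotation is on" will keep renewing it. That file is a rotating PEM that can be checked with standard tooling; the command is generic:

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates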
Nov 6 00:22:34.122929 kubelet[2773]: I1106 00:22:34.122923 2773 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:22:34.123129 kubelet[2773]: E1106 00:22:34.122994 2773 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:22:34.188189 kubelet[2773]: I1106 00:22:34.188140 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:22:34.188513 kubelet[2773]: I1106 00:22:34.188399 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:22:34.188513 kubelet[2773]: I1106 00:22:34.188423 2773 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:34.189058 kubelet[2773]: I1106 00:22:34.188985 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:22:34.189058 kubelet[2773]: I1106 00:22:34.189006 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:22:34.189058 kubelet[2773]: I1106 00:22:34.189031 2773 policy_none.go:49] "None policy: Start" Nov 6 00:22:34.189431 kubelet[2773]: I1106 00:22:34.189279 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:22:34.189431 kubelet[2773]: I1106 00:22:34.189302 2773 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:22:34.189727 kubelet[2773]: I1106 00:22:34.189657 2773 state_mem.go:75] "Updated machine memory state" Nov 6 00:22:34.203687 kubelet[2773]: E1106 00:22:34.203360 2773 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:22:34.205519 kubelet[2773]: I1106 00:22:34.204809 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:22:34.206108 kubelet[2773]: I1106 00:22:34.204833 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:22:34.207597 kubelet[2773]: I1106 00:22:34.207578 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:22:34.210384 kubelet[2773]: E1106 00:22:34.210285 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:22:34.224540 kubelet[2773]: I1106 00:22:34.223769 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.224540 kubelet[2773]: I1106 00:22:34.224241 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.225102 kubelet[2773]: I1106 00:22:34.225081 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.238903 kubelet[2773]: I1106 00:22:34.238877 2773 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 00:22:34.247555 kubelet[2773]: I1106 00:22:34.247499 2773 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 00:22:34.249777 kubelet[2773]: I1106 00:22:34.249655 2773 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 00:22:34.250036 kubelet[2773]: E1106 00:22:34.249928 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.262779 kubelet[2773]: I1106 00:22:34.262678 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263142 kubelet[2773]: I1106 00:22:34.262987 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263142 kubelet[2773]: I1106 00:22:34.263066 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263353 kubelet[2773]: I1106 00:22:34.263098 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: 
\"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263353 kubelet[2773]: I1106 00:22:34.263335 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68dab48e38e2b2078cd5aee421afdb67-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"68dab48e38e2b2078cd5aee421afdb67\") " pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263534 kubelet[2773]: I1106 00:22:34.263482 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263534 kubelet[2773]: I1106 00:22:34.263505 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263694 kubelet[2773]: I1106 00:22:34.263641 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4d57ffbe38d31533367d5ee20db4da-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"1b4d57ffbe38d31533367d5ee20db4da\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.263694 kubelet[2773]: I1106 00:22:34.263666 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ec2c8fa39bdcb257b9e1d88596349d6-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" (UID: \"8ec2c8fa39bdcb257b9e1d88596349d6\") " pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.324132 kubelet[2773]: I1106 00:22:34.324012 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.336352 kubelet[2773]: I1106 00:22:34.336132 2773 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.336352 kubelet[2773]: I1106 00:22:34.336260 2773 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:22:34.991422 kubelet[2773]: I1106 00:22:34.991088 2773 apiserver.go:52] "Watching apiserver" Nov 6 00:22:35.062526 kubelet[2773]: I1106 00:22:35.062469 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:22:35.062970 kubelet[2773]: I1106 00:22:35.062719 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" podStartSLOduration=1.062700664 podStartE2EDuration="1.062700664s" podCreationTimestamp="2025-11-06 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:35.044559009 +0000 UTC m=+1.160737935" watchObservedRunningTime="2025-11-06 00:22:35.062700664 +0000 UTC m=+1.178879583" Nov 6 00:22:35.081864 kubelet[2773]: I1106 00:22:35.081784 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" podStartSLOduration=2.081641916 podStartE2EDuration="2.081641916s" podCreationTimestamp="2025-11-06 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:35.064803863 +0000 UTC m=+1.180982788" watchObservedRunningTime="2025-11-06 00:22:35.081641916 +0000 UTC m=+1.197820842" Nov 6 00:22:35.083686 kubelet[2773]: I1106 00:22:35.083620 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" podStartSLOduration=1.083596313 podStartE2EDuration="1.083596313s" podCreationTimestamp="2025-11-06 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:35.078350133 +0000 UTC m=+1.194529058" watchObservedRunningTime="2025-11-06 00:22:35.083596313 +0000 UTC m=+1.199775240" Nov 6 00:22:38.625007 kubelet[2773]: I1106 00:22:38.624959 2773 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:22:38.625930 containerd[1518]: time="2025-11-06T00:22:38.625888398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:22:38.626494 kubelet[2773]: I1106 00:22:38.626361 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:22:39.134996 systemd[1]: Created slice kubepods-besteffort-pod4a75d517_062f_4895_abea_739ee1e3f77d.slice - libcontainer container kubepods-besteffort-pod4a75d517_062f_4895_abea_739ee1e3f77d.slice. 
Nov 6 00:22:39.190601 kubelet[2773]: I1106 00:22:39.190362 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a75d517-062f-4895-abea-739ee1e3f77d-kube-proxy\") pod \"kube-proxy-zvkmm\" (UID: \"4a75d517-062f-4895-abea-739ee1e3f77d\") " pod="kube-system/kube-proxy-zvkmm" Nov 6 00:22:39.190601 kubelet[2773]: I1106 00:22:39.190415 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a75d517-062f-4895-abea-739ee1e3f77d-xtables-lock\") pod \"kube-proxy-zvkmm\" (UID: \"4a75d517-062f-4895-abea-739ee1e3f77d\") " pod="kube-system/kube-proxy-zvkmm" Nov 6 00:22:39.190601 kubelet[2773]: I1106 00:22:39.190487 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a75d517-062f-4895-abea-739ee1e3f77d-lib-modules\") pod \"kube-proxy-zvkmm\" (UID: \"4a75d517-062f-4895-abea-739ee1e3f77d\") " pod="kube-system/kube-proxy-zvkmm" Nov 6 00:22:39.190601 kubelet[2773]: I1106 00:22:39.190521 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxttp\" (UniqueName: \"kubernetes.io/projected/4a75d517-062f-4895-abea-739ee1e3f77d-kube-api-access-hxttp\") pod \"kube-proxy-zvkmm\" (UID: \"4a75d517-062f-4895-abea-739ee1e3f77d\") " pod="kube-system/kube-proxy-zvkmm" Nov 6 00:22:39.447379 containerd[1518]: time="2025-11-06T00:22:39.447122914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvkmm,Uid:4a75d517-062f-4895-abea-739ee1e3f77d,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:39.480686 containerd[1518]: time="2025-11-06T00:22:39.480626916Z" level=info msg="connecting to shim 6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6" address="unix:///run/containerd/s/911f6adef3f97679501db7645d86cdd770809be2f1396f3adf23e14f54117bc4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:39.526971 systemd[1]: Started cri-containerd-6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6.scope - libcontainer container 6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6. 
Nov 6 00:22:39.579617 containerd[1518]: time="2025-11-06T00:22:39.579567928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvkmm,Uid:4a75d517-062f-4895-abea-739ee1e3f77d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6\"" Nov 6 00:22:39.588140 containerd[1518]: time="2025-11-06T00:22:39.588094435Z" level=info msg="CreateContainer within sandbox \"6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:22:39.605499 containerd[1518]: time="2025-11-06T00:22:39.605446595Z" level=info msg="Container 00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:39.625851 containerd[1518]: time="2025-11-06T00:22:39.625796420Z" level=info msg="CreateContainer within sandbox \"6278ff36653aaf01eb3b383cc64b58a9ba42eb082d7200123d4a9cf4dd5103a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8\"" Nov 6 00:22:39.630649 containerd[1518]: time="2025-11-06T00:22:39.630565875Z" level=info msg="StartContainer for \"00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8\"" Nov 6 00:22:39.639525 containerd[1518]: time="2025-11-06T00:22:39.639446162Z" level=info msg="connecting to shim 00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8" address="unix:///run/containerd/s/911f6adef3f97679501db7645d86cdd770809be2f1396f3adf23e14f54117bc4" protocol=ttrpc version=3 Nov 6 00:22:39.674965 systemd[1]: Started cri-containerd-00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8.scope - libcontainer container 00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8. Nov 6 00:22:39.776369 containerd[1518]: time="2025-11-06T00:22:39.776089271Z" level=info msg="StartContainer for \"00906ff0253b8714d492960c61a36913480568cae13c3d3c5256867e1c94b6c8\" returns successfully" Nov 6 00:22:39.810267 systemd[1]: Created slice kubepods-besteffort-pod939fef87_6867_4fca_8d3b_9bd20abc70b9.slice - libcontainer container kubepods-besteffort-pod939fef87_6867_4fca_8d3b_9bd20abc70b9.slice. 
Nov 6 00:22:39.896118 kubelet[2773]: I1106 00:22:39.896061 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/939fef87-6867-4fca-8d3b-9bd20abc70b9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pl6x7\" (UID: \"939fef87-6867-4fca-8d3b-9bd20abc70b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-pl6x7" Nov 6 00:22:39.896118 kubelet[2773]: I1106 00:22:39.896138 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/939fef87-6867-4fca-8d3b-9bd20abc70b9-kube-api-access-2xsh5\") pod \"tigera-operator-7dcd859c48-pl6x7\" (UID: \"939fef87-6867-4fca-8d3b-9bd20abc70b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-pl6x7" Nov 6 00:22:40.120225 containerd[1518]: time="2025-11-06T00:22:40.120079695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pl6x7,Uid:939fef87-6867-4fca-8d3b-9bd20abc70b9,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:22:40.153045 containerd[1518]: time="2025-11-06T00:22:40.152968300Z" level=info msg="connecting to shim eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5" address="unix:///run/containerd/s/54ea5bfdb22cefae4c8e1e9452b0e785d0466aacdb0606c7660d97f647b938b0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:40.204944 systemd[1]: Started cri-containerd-eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5.scope - libcontainer container eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5. Nov 6 00:22:40.300620 containerd[1518]: time="2025-11-06T00:22:40.300545385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pl6x7,Uid:939fef87-6867-4fca-8d3b-9bd20abc70b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5\"" Nov 6 00:22:40.305888 containerd[1518]: time="2025-11-06T00:22:40.305838839Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:22:41.626589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986740824.mount: Deactivated successfully. 
Nov 6 00:22:41.943895 kubelet[2773]: I1106 00:22:41.943519 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zvkmm" podStartSLOduration=2.943497066 podStartE2EDuration="2.943497066s" podCreationTimestamp="2025-11-06 00:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:40.206355036 +0000 UTC m=+6.322533960" watchObservedRunningTime="2025-11-06 00:22:41.943497066 +0000 UTC m=+8.059675991" Nov 6 00:22:42.623042 containerd[1518]: time="2025-11-06T00:22:42.622967375Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:42.624525 containerd[1518]: time="2025-11-06T00:22:42.624273257Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:22:42.625622 containerd[1518]: time="2025-11-06T00:22:42.625581257Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:42.628499 containerd[1518]: time="2025-11-06T00:22:42.628458576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:42.629478 containerd[1518]: time="2025-11-06T00:22:42.629439810Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.322612354s" Nov 6 00:22:42.629642 containerd[1518]: time="2025-11-06T00:22:42.629618474Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:22:42.635099 containerd[1518]: time="2025-11-06T00:22:42.635065052Z" level=info msg="CreateContainer within sandbox \"eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:22:42.645288 containerd[1518]: time="2025-11-06T00:22:42.645254531Z" level=info msg="Container d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:42.656229 containerd[1518]: time="2025-11-06T00:22:42.656179829Z" level=info msg="CreateContainer within sandbox \"eea2987c3ce3f62c621bf7bdd59192f9a05b89cb38690364c3c9dca107068be5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a\"" Nov 6 00:22:42.657067 containerd[1518]: time="2025-11-06T00:22:42.656858787Z" level=info msg="StartContainer for \"d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a\"" Nov 6 00:22:42.658113 containerd[1518]: time="2025-11-06T00:22:42.658049612Z" level=info msg="connecting to shim d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a" address="unix:///run/containerd/s/54ea5bfdb22cefae4c8e1e9452b0e785d0466aacdb0606c7660d97f647b938b0" protocol=ttrpc version=3 Nov 6 00:22:42.690982 systemd[1]: Started 
cri-containerd-d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a.scope - libcontainer container d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a. Nov 6 00:22:42.737051 containerd[1518]: time="2025-11-06T00:22:42.736885671Z" level=info msg="StartContainer for \"d936a8c1dcd0761549f160abcffc42f2370cbde19246353e0f9dc946b119de2a\" returns successfully" Nov 6 00:22:44.453894 update_engine[1506]: I20251106 00:22:44.453809 1506 update_attempter.cc:509] Updating boot flags... Nov 6 00:22:45.737659 kubelet[2773]: I1106 00:22:45.737570 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pl6x7" podStartSLOduration=4.410366004 podStartE2EDuration="6.737549988s" podCreationTimestamp="2025-11-06 00:22:39 +0000 UTC" firstStartedPulling="2025-11-06 00:22:40.303478204 +0000 UTC m=+6.419657104" lastFinishedPulling="2025-11-06 00:22:42.630662175 +0000 UTC m=+8.746841088" observedRunningTime="2025-11-06 00:22:43.207667058 +0000 UTC m=+9.323845982" watchObservedRunningTime="2025-11-06 00:22:45.737549988 +0000 UTC m=+11.853728914" Nov 6 00:22:50.122011 sudo[1830]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:50.167901 sshd[1829]: Connection closed by 147.75.109.163 port 59386 Nov 6 00:22:50.169029 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:50.184228 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:22:50.186496 systemd[1]: sshd@6-10.128.0.9:22-147.75.109.163:59386.service: Deactivated successfully. Nov 6 00:22:50.196669 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:22:50.197047 systemd[1]: session-7.scope: Consumed 7.399s CPU time, 233.4M memory peak. Nov 6 00:22:50.203532 systemd-logind[1500]: Removed session 7. Nov 6 00:22:58.226497 systemd[1]: Created slice kubepods-besteffort-pod2e2b049c_b17f_4487_b356_352644d73d72.slice - libcontainer container kubepods-besteffort-pod2e2b049c_b17f_4487_b356_352644d73d72.slice. 
Nov 6 00:22:58.333007 kubelet[2773]: I1106 00:22:58.332955 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e2b049c-b17f-4487-b356-352644d73d72-tigera-ca-bundle\") pod \"calico-typha-786fd6b447-frm2s\" (UID: \"2e2b049c-b17f-4487-b356-352644d73d72\") " pod="calico-system/calico-typha-786fd6b447-frm2s" Nov 6 00:22:58.333867 kubelet[2773]: I1106 00:22:58.333704 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rcs\" (UniqueName: \"kubernetes.io/projected/2e2b049c-b17f-4487-b356-352644d73d72-kube-api-access-l8rcs\") pod \"calico-typha-786fd6b447-frm2s\" (UID: \"2e2b049c-b17f-4487-b356-352644d73d72\") " pod="calico-system/calico-typha-786fd6b447-frm2s" Nov 6 00:22:58.333867 kubelet[2773]: I1106 00:22:58.333805 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2e2b049c-b17f-4487-b356-352644d73d72-typha-certs\") pod \"calico-typha-786fd6b447-frm2s\" (UID: \"2e2b049c-b17f-4487-b356-352644d73d72\") " pod="calico-system/calico-typha-786fd6b447-frm2s" Nov 6 00:22:58.537782 containerd[1518]: time="2025-11-06T00:22:58.537333071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-786fd6b447-frm2s,Uid:2e2b049c-b17f-4487-b356-352644d73d72,Namespace:calico-system,Attempt:0,}" Nov 6 00:22:58.541765 systemd[1]: Created slice kubepods-besteffort-podec74ff2d_cb37_40e6_a595_f25902cf48e1.slice - libcontainer container kubepods-besteffort-podec74ff2d_cb37_40e6_a595_f25902cf48e1.slice. Nov 6 00:22:58.587777 containerd[1518]: time="2025-11-06T00:22:58.587714362Z" level=info msg="connecting to shim 1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2" address="unix:///run/containerd/s/0282178022c007b5cbfe380b74e9f7626866eda6ef400401838a5cc9667481d6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:58.629556 systemd[1]: Started cri-containerd-1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2.scope - libcontainer container 1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2. 
Nov 6 00:22:58.636395 kubelet[2773]: I1106 00:22:58.635970 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec74ff2d-cb37-40e6-a595-f25902cf48e1-tigera-ca-bundle\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.636395 kubelet[2773]: I1106 00:22:58.636024 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-cni-log-dir\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.636395 kubelet[2773]: I1106 00:22:58.636053 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-lib-modules\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.636395 kubelet[2773]: I1106 00:22:58.636079 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-cni-bin-dir\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.636395 kubelet[2773]: I1106 00:22:58.636104 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-xtables-lock\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637372 kubelet[2773]: I1106 00:22:58.636128 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-var-lib-calico\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637372 kubelet[2773]: I1106 00:22:58.636161 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-268td\" (UniqueName: \"kubernetes.io/projected/ec74ff2d-cb37-40e6-a595-f25902cf48e1-kube-api-access-268td\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637372 kubelet[2773]: I1106 00:22:58.636191 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-policysync\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637372 kubelet[2773]: I1106 00:22:58.636225 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-var-run-calico\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637372 kubelet[2773]: I1106 00:22:58.636259 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-flexvol-driver-host\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637892 kubelet[2773]: I1106 00:22:58.636297 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec74ff2d-cb37-40e6-a595-f25902cf48e1-node-certs\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.637892 kubelet[2773]: I1106 00:22:58.636328 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec74ff2d-cb37-40e6-a595-f25902cf48e1-cni-net-dir\") pod \"calico-node-rpdt6\" (UID: \"ec74ff2d-cb37-40e6-a595-f25902cf48e1\") " pod="calico-system/calico-node-rpdt6" Nov 6 00:22:58.709602 containerd[1518]: time="2025-11-06T00:22:58.709506184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-786fd6b447-frm2s,Uid:2e2b049c-b17f-4487-b356-352644d73d72,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2\"" Nov 6 00:22:58.714080 containerd[1518]: time="2025-11-06T00:22:58.713996307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:22:58.751591 kubelet[2773]: E1106 00:22:58.751143 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:22:58.757794 kubelet[2773]: E1106 00:22:58.754880 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.757794 kubelet[2773]: W1106 00:22:58.754907 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.757794 kubelet[2773]: E1106 00:22:58.754961 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.766537 kubelet[2773]: I1106 00:22:58.766485 2773 status_manager.go:895] "Failed to get status for pod" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" pod="calico-system/csi-node-driver-zgmkz" err="pods \"csi-node-driver-zgmkz\" is forbidden: User \"system:node:ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' and this object" Nov 6 00:22:58.786958 kubelet[2773]: E1106 00:22:58.786862 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.787256 kubelet[2773]: W1106 00:22:58.787055 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.787256 kubelet[2773]: E1106 00:22:58.787092 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.807621 kubelet[2773]: E1106 00:22:58.807492 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.807621 kubelet[2773]: W1106 00:22:58.807518 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.807621 kubelet[2773]: E1106 00:22:58.807543 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.808818 kubelet[2773]: E1106 00:22:58.808790 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.808818 kubelet[2773]: W1106 00:22:58.808813 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.809222 kubelet[2773]: E1106 00:22:58.808835 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.809288 kubelet[2773]: E1106 00:22:58.809278 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.809344 kubelet[2773]: W1106 00:22:58.809293 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.809344 kubelet[2773]: E1106 00:22:58.809338 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.809738 kubelet[2773]: E1106 00:22:58.809716 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.809738 kubelet[2773]: W1106 00:22:58.809734 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.809926 kubelet[2773]: E1106 00:22:58.809770 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.810106 kubelet[2773]: E1106 00:22:58.810087 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.810106 kubelet[2773]: W1106 00:22:58.810107 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.810338 kubelet[2773]: E1106 00:22:58.810123 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.810571 kubelet[2773]: E1106 00:22:58.810550 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.810571 kubelet[2773]: W1106 00:22:58.810571 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.810834 kubelet[2773]: E1106 00:22:58.810587 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.811012 kubelet[2773]: E1106 00:22:58.810995 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.811263 kubelet[2773]: W1106 00:22:58.811106 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.811263 kubelet[2773]: E1106 00:22:58.811131 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.811583 kubelet[2773]: E1106 00:22:58.811566 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.811779 kubelet[2773]: W1106 00:22:58.811664 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.811779 kubelet[2773]: E1106 00:22:58.811684 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.812479 kubelet[2773]: E1106 00:22:58.812454 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.812479 kubelet[2773]: W1106 00:22:58.812477 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.812651 kubelet[2773]: E1106 00:22:58.812494 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.812858 kubelet[2773]: E1106 00:22:58.812837 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.812858 kubelet[2773]: W1106 00:22:58.812856 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.813090 kubelet[2773]: E1106 00:22:58.812873 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.813212 kubelet[2773]: E1106 00:22:58.813149 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.813212 kubelet[2773]: W1106 00:22:58.813171 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.813212 kubelet[2773]: E1106 00:22:58.813186 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.813535 kubelet[2773]: E1106 00:22:58.813482 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.813535 kubelet[2773]: W1106 00:22:58.813496 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.813535 kubelet[2773]: E1106 00:22:58.813512 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.814021 kubelet[2773]: E1106 00:22:58.814001 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.814235 kubelet[2773]: W1106 00:22:58.814165 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.814235 kubelet[2773]: E1106 00:22:58.814202 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.815084 kubelet[2773]: E1106 00:22:58.815061 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.815313 kubelet[2773]: W1106 00:22:58.815250 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.815524 kubelet[2773]: E1106 00:22:58.815286 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.816101 kubelet[2773]: E1106 00:22:58.816075 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.816101 kubelet[2773]: W1106 00:22:58.816097 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.816330 kubelet[2773]: E1106 00:22:58.816123 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.816438 kubelet[2773]: E1106 00:22:58.816410 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.816438 kubelet[2773]: W1106 00:22:58.816424 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.816600 kubelet[2773]: E1106 00:22:58.816443 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.816939 kubelet[2773]: E1106 00:22:58.816741 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.816939 kubelet[2773]: W1106 00:22:58.816778 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.816939 kubelet[2773]: E1106 00:22:58.816795 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.817161 kubelet[2773]: E1106 00:22:58.817129 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.817161 kubelet[2773]: W1106 00:22:58.817159 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.817272 kubelet[2773]: E1106 00:22:58.817175 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.817528 kubelet[2773]: E1106 00:22:58.817510 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.817528 kubelet[2773]: W1106 00:22:58.817526 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.817676 kubelet[2773]: E1106 00:22:58.817541 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.817887 kubelet[2773]: E1106 00:22:58.817860 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.817887 kubelet[2773]: W1106 00:22:58.817884 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.818029 kubelet[2773]: E1106 00:22:58.817901 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.840626 kubelet[2773]: E1106 00:22:58.840341 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.840626 kubelet[2773]: W1106 00:22:58.840368 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.840626 kubelet[2773]: E1106 00:22:58.840393 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.840626 kubelet[2773]: I1106 00:22:58.840431 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70152e9b-de49-41f1-96dc-b8cd479787b2-registration-dir\") pod \"csi-node-driver-zgmkz\" (UID: \"70152e9b-de49-41f1-96dc-b8cd479787b2\") " pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:22:58.841479 kubelet[2773]: E1106 00:22:58.841247 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.841479 kubelet[2773]: W1106 00:22:58.841269 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.841863 kubelet[2773]: E1106 00:22:58.841288 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.841863 kubelet[2773]: I1106 00:22:58.841802 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70152e9b-de49-41f1-96dc-b8cd479787b2-kubelet-dir\") pod \"csi-node-driver-zgmkz\" (UID: \"70152e9b-de49-41f1-96dc-b8cd479787b2\") " pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:22:58.842146 kubelet[2773]: E1106 00:22:58.842113 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.842466 kubelet[2773]: W1106 00:22:58.842128 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.842466 kubelet[2773]: E1106 00:22:58.842177 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.843544 kubelet[2773]: E1106 00:22:58.843516 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.843544 kubelet[2773]: W1106 00:22:58.843543 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.844033 kubelet[2773]: E1106 00:22:58.843563 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.844921 kubelet[2773]: E1106 00:22:58.844114 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.844921 kubelet[2773]: W1106 00:22:58.844129 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.844921 kubelet[2773]: E1106 00:22:58.844146 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.844921 kubelet[2773]: I1106 00:22:58.844187 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70152e9b-de49-41f1-96dc-b8cd479787b2-socket-dir\") pod \"csi-node-driver-zgmkz\" (UID: \"70152e9b-de49-41f1-96dc-b8cd479787b2\") " pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:22:58.845498 kubelet[2773]: E1106 00:22:58.845382 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.845498 kubelet[2773]: W1106 00:22:58.845403 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.845498 kubelet[2773]: E1106 00:22:58.845422 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.846103 kubelet[2773]: E1106 00:22:58.845928 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.846103 kubelet[2773]: W1106 00:22:58.845959 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.846103 kubelet[2773]: E1106 00:22:58.845976 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.846934 kubelet[2773]: E1106 00:22:58.846915 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.847041 kubelet[2773]: W1106 00:22:58.847024 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.847199 kubelet[2773]: E1106 00:22:58.847171 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.847341 kubelet[2773]: I1106 00:22:58.847321 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70152e9b-de49-41f1-96dc-b8cd479787b2-varrun\") pod \"csi-node-driver-zgmkz\" (UID: \"70152e9b-de49-41f1-96dc-b8cd479787b2\") " pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:22:58.847987 kubelet[2773]: E1106 00:22:58.847858 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.847987 kubelet[2773]: W1106 00:22:58.847894 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.847987 kubelet[2773]: E1106 00:22:58.847914 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.848714 kubelet[2773]: E1106 00:22:58.848587 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.848714 kubelet[2773]: W1106 00:22:58.848606 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.848714 kubelet[2773]: E1106 00:22:58.848624 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.849343 kubelet[2773]: E1106 00:22:58.849218 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.849343 kubelet[2773]: W1106 00:22:58.849237 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.849343 kubelet[2773]: E1106 00:22:58.849253 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.849343 kubelet[2773]: I1106 00:22:58.849292 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfdh\" (UniqueName: \"kubernetes.io/projected/70152e9b-de49-41f1-96dc-b8cd479787b2-kube-api-access-9nfdh\") pod \"csi-node-driver-zgmkz\" (UID: \"70152e9b-de49-41f1-96dc-b8cd479787b2\") " pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:22:58.849710 kubelet[2773]: E1106 00:22:58.849689 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.849882 kubelet[2773]: W1106 00:22:58.849838 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.849882 kubelet[2773]: E1106 00:22:58.849863 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.850996 kubelet[2773]: E1106 00:22:58.850971 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.850996 kubelet[2773]: W1106 00:22:58.850993 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.851253 kubelet[2773]: E1106 00:22:58.851013 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:22:58.852551 kubelet[2773]: E1106 00:22:58.852523 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.852551 kubelet[2773]: W1106 00:22:58.852549 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.852715 kubelet[2773]: E1106 00:22:58.852566 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:22:58.852886 kubelet[2773]: E1106 00:22:58.852857 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.852886 kubelet[2773]: W1106 00:22:58.852880 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.853041 kubelet[2773]: E1106 00:22:58.852895 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:22:58.853623 containerd[1518]: time="2025-11-06T00:22:58.853536577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpdt6,Uid:ec74ff2d-cb37-40e6-a595-f25902cf48e1,Namespace:calico-system,Attempt:0,}"
Nov 6 00:22:58.890585 containerd[1518]: time="2025-11-06T00:22:58.890517493Z" level=info msg="connecting to shim df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20" address="unix:///run/containerd/s/ce063ad080255ae27f12f630ebce9097a237ee79c9ded9455b68061049ec03e9" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:22:58.935982 systemd[1]: Started cri-containerd-df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20.scope - libcontainer container df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20.
Nov 6 00:22:58.950711 kubelet[2773]: E1106 00:22:58.950676 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.950711 kubelet[2773]: W1106 00:22:58.950705 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.950928 kubelet[2773]: E1106 00:22:58.950732 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:22:58.989408 kubelet[2773]: E1106 00:22:58.989312 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:22:58.989408 kubelet[2773]: W1106 00:22:58.989338 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:22:58.989408 kubelet[2773]: E1106 00:22:58.989360 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:22:58.996685 containerd[1518]: time="2025-11-06T00:22:58.996562670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpdt6,Uid:ec74ff2d-cb37-40e6-a595-f25902cf48e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\""
Nov 6 00:22:59.674001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667119242.mount: Deactivated successfully.
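The repeated kubelet errors above all come from the FlexVolume probe of the nodeagent~uds driver: the binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not installed yet, so each "init" call produces no output and the empty string fails JSON decoding. A minimal Go sketch of that failure mode (the struct is an illustrative stand-in, not the kubelet's actual type):

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for the JSON status a FlexVolume driver is expected to print
// in response to "init"; field names here are illustrative.
type initStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st initStatus

	// Empty driver output, as when the uds binary is missing from the
	// plugin directory: json.Unmarshal fails exactly as logged.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// A well-formed init response parses cleanly.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	fmt.Println(json.Unmarshal(ok, &st), st.Status) // <nil> Success
}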
Nov 6 00:23:00.124541 kubelet[2773]: E1106 00:23:00.124491 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:00.850315 containerd[1518]: time="2025-11-06T00:23:00.850245031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:00.851873 containerd[1518]: time="2025-11-06T00:23:00.851630507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:23:00.853041 containerd[1518]: time="2025-11-06T00:23:00.853004576Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:00.856912 containerd[1518]: time="2025-11-06T00:23:00.856870544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:00.857669 containerd[1518]: time="2025-11-06T00:23:00.857632492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.143589658s" Nov 6 00:23:00.857943 containerd[1518]: time="2025-11-06T00:23:00.857821356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:23:00.860847 containerd[1518]: time="2025-11-06T00:23:00.860082251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:23:00.893352 containerd[1518]: time="2025-11-06T00:23:00.893296185Z" level=info msg="CreateContainer within sandbox \"1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:23:00.907600 containerd[1518]: time="2025-11-06T00:23:00.902996196Z" level=info msg="Container 2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:00.914909 containerd[1518]: time="2025-11-06T00:23:00.914858968Z" level=info msg="CreateContainer within sandbox \"1cd4fd97f016b039be9ac316428ab400a1371ed87845cf3bf474b17cc30243f2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b\"" Nov 6 00:23:00.916126 containerd[1518]: time="2025-11-06T00:23:00.915576891Z" level=info msg="StartContainer for \"2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b\"" Nov 6 00:23:00.917436 containerd[1518]: time="2025-11-06T00:23:00.917397091Z" level=info msg="connecting to shim 2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b" address="unix:///run/containerd/s/0282178022c007b5cbfe380b74e9f7626866eda6ef400401838a5cc9667481d6" protocol=ttrpc version=3 Nov 6 00:23:00.956960 systemd[1]: Started 
cri-containerd-2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b.scope - libcontainer container 2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b.
Nov 6 00:23:01.152935 containerd[1518]: time="2025-11-06T00:23:01.152241578Z" level=info msg="StartContainer for \"2181535efb7964a9fc7d8dd3c0362f563161e0e98dad773d745241c494f6806b\" returns successfully"
Nov 6 00:23:01.287938 kubelet[2773]: I1106 00:23:01.287006 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-786fd6b447-frm2s" podStartSLOduration=1.1412723439999999 podStartE2EDuration="3.286980567s" podCreationTimestamp="2025-11-06 00:22:58 +0000 UTC" firstStartedPulling="2025-11-06 00:22:58.713404922 +0000 UTC m=+24.829583833" lastFinishedPulling="2025-11-06 00:23:00.859113147 +0000 UTC m=+26.975292056" observedRunningTime="2025-11-06 00:23:01.27627621 +0000 UTC m=+27.392455134" watchObservedRunningTime="2025-11-06 00:23:01.286980567 +0000 UTC m=+27.403159496"
Nov 6 00:23:01.336975 kubelet[2773]: E1106 00:23:01.336895 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:01.336975 kubelet[2773]: W1106 00:23:01.336928 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:01.337586 kubelet[2773]: E1106 00:23:01.337274 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 00:23:01.388819 kubelet[2773]: E1106 00:23:01.388798 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:01.388982 kubelet[2773]: W1106 00:23:01.388962 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:01.389096 kubelet[2773]: E1106 00:23:01.389078 2773 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 6 00:23:01.913952 containerd[1518]: time="2025-11-06T00:23:01.913892362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:01.915200 containerd[1518]: time="2025-11-06T00:23:01.915134612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:23:01.916971 containerd[1518]: time="2025-11-06T00:23:01.916919995Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:01.919895 containerd[1518]: time="2025-11-06T00:23:01.919832202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:01.921159 containerd[1518]: time="2025-11-06T00:23:01.920602698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.059800937s" Nov 6 00:23:01.921159 containerd[1518]: time="2025-11-06T00:23:01.920648903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:23:01.926467 containerd[1518]: time="2025-11-06T00:23:01.926429938Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:23:01.939989 containerd[1518]: time="2025-11-06T00:23:01.939908906Z" level=info msg="Container 869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:01.957839 containerd[1518]: time="2025-11-06T00:23:01.957787559Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\"" Nov 6 00:23:01.959062 containerd[1518]: time="2025-11-06T00:23:01.959000397Z" level=info msg="StartContainer for \"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\"" Nov 6 00:23:01.963439 containerd[1518]: time="2025-11-06T00:23:01.963111486Z" level=info msg="connecting to shim 869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf" address="unix:///run/containerd/s/ce063ad080255ae27f12f630ebce9097a237ee79c9ded9455b68061049ec03e9" protocol=ttrpc version=3 Nov 6 00:23:02.002007 systemd[1]: Started cri-containerd-869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf.scope - libcontainer container 869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf. 
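The flexvol-driver container started here is the Calico init container built from pod2daemon-flexvol; it is what eventually populates the nodeagent~uds plugin directory, which is why the probe failures above stop later in the log. A rough sketch of the condition the prober keeps tripping over until the binary lands (not kubelet code; the directory layout is taken from the paths in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("plugin dir not readable:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.Contains(e.Name(), "~") {
			continue
		}
		// Convention seen in the log: the driver binary carries the part
		// after the "~", e.g. nodeagent~uds/uds.
		driver := e.Name()[strings.Index(e.Name(), "~")+1:]
		bin := filepath.Join(root, e.Name(), driver)
		if _, err := os.Stat(bin); err != nil {
			fmt.Printf("%s: driver binary missing (%v)\n", e.Name(), err)
		} else {
			fmt.Printf("%s: driver binary present\n", e.Name())
		}
	}
}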
Nov 6 00:23:02.065691 containerd[1518]: time="2025-11-06T00:23:02.065622401Z" level=info msg="StartContainer for \"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\" returns successfully" Nov 6 00:23:02.089580 systemd[1]: cri-containerd-869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf.scope: Deactivated successfully. Nov 6 00:23:02.095867 containerd[1518]: time="2025-11-06T00:23:02.095801486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\" id:\"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\" pid:3472 exited_at:{seconds:1762388582 nanos:95174810}" Nov 6 00:23:02.095867 containerd[1518]: time="2025-11-06T00:23:02.095820799Z" level=info msg="received exit event container_id:\"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\" id:\"869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf\" pid:3472 exited_at:{seconds:1762388582 nanos:95174810}" Nov 6 00:23:02.125026 kubelet[2773]: E1106 00:23:02.124812 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:02.135664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-869f1782d9adf30bd194ea11970c1749d9f051601a72632c3b27d5fa7c14dfdf-rootfs.mount: Deactivated successfully. Nov 6 00:23:02.260802 kubelet[2773]: I1106 00:23:02.260634 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:23:04.126872 kubelet[2773]: E1106 00:23:04.126807 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:04.270687 containerd[1518]: time="2025-11-06T00:23:04.270620656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:23:06.123934 kubelet[2773]: E1106 00:23:06.123866 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:07.421910 containerd[1518]: time="2025-11-06T00:23:07.421850247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:07.423338 containerd[1518]: time="2025-11-06T00:23:07.423085447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:23:07.424847 containerd[1518]: time="2025-11-06T00:23:07.424798632Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:07.428556 containerd[1518]: time="2025-11-06T00:23:07.427599019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 6 00:23:07.428556 containerd[1518]: time="2025-11-06T00:23:07.428428730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.157717541s" Nov 6 00:23:07.428556 containerd[1518]: time="2025-11-06T00:23:07.428465166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:23:07.435106 containerd[1518]: time="2025-11-06T00:23:07.435062152Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:23:07.448047 containerd[1518]: time="2025-11-06T00:23:07.448010989Z" level=info msg="Container ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:07.459237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256590504.mount: Deactivated successfully. Nov 6 00:23:07.465696 containerd[1518]: time="2025-11-06T00:23:07.465652211Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\"" Nov 6 00:23:07.466721 containerd[1518]: time="2025-11-06T00:23:07.466562297Z" level=info msg="StartContainer for \"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\"" Nov 6 00:23:07.469072 containerd[1518]: time="2025-11-06T00:23:07.469023329Z" level=info msg="connecting to shim ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed" address="unix:///run/containerd/s/ce063ad080255ae27f12f630ebce9097a237ee79c9ded9455b68061049ec03e9" protocol=ttrpc version=3 Nov 6 00:23:07.506996 systemd[1]: Started cri-containerd-ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed.scope - libcontainer container ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed. Nov 6 00:23:07.569733 containerd[1518]: time="2025-11-06T00:23:07.569693671Z" level=info msg="StartContainer for \"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\" returns successfully" Nov 6 00:23:08.126518 kubelet[2773]: E1106 00:23:08.126452 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:08.659863 containerd[1518]: time="2025-11-06T00:23:08.659788057Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:23:08.662629 systemd[1]: cri-containerd-ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed.scope: Deactivated successfully. 
Nov 6 00:23:08.664116 systemd[1]: cri-containerd-ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed.scope: Consumed 639ms CPU time, 191.9M memory peak, 171.3M written to disk. Nov 6 00:23:08.667720 containerd[1518]: time="2025-11-06T00:23:08.667652969Z" level=info msg="received exit event container_id:\"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\" id:\"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\" pid:3531 exited_at:{seconds:1762388588 nanos:667396929}" Nov 6 00:23:08.668087 containerd[1518]: time="2025-11-06T00:23:08.668029205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\" id:\"ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed\" pid:3531 exited_at:{seconds:1762388588 nanos:667396929}" Nov 6 00:23:08.698396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed43eb3df9be863c0b75890bea8350808498bf1845cfdbd5ba216bbf16a1f9ed-rootfs.mount: Deactivated successfully. Nov 6 00:23:08.724156 kubelet[2773]: I1106 00:23:08.724047 2773 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:23:09.110371 kubelet[2773]: I1106 00:23:09.027274 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b055a3cb-6725-4c38-a9df-541d3ef5e7bb-config-volume\") pod \"coredns-674b8bbfcf-v226n\" (UID: \"b055a3cb-6725-4c38-a9df-541d3ef5e7bb\") " pod="kube-system/coredns-674b8bbfcf-v226n" Nov 6 00:23:09.110371 kubelet[2773]: I1106 00:23:09.027344 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2d5z\" (UniqueName: \"kubernetes.io/projected/b055a3cb-6725-4c38-a9df-541d3ef5e7bb-kube-api-access-s2d5z\") pod \"coredns-674b8bbfcf-v226n\" (UID: \"b055a3cb-6725-4c38-a9df-541d3ef5e7bb\") " pod="kube-system/coredns-674b8bbfcf-v226n" Nov 6 00:23:09.128054 kubelet[2773]: I1106 00:23:09.128014 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/277f2c19-e4a9-4f03-8480-9bd1e1253861-config-volume\") pod \"coredns-674b8bbfcf-h5nc2\" (UID: \"277f2c19-e4a9-4f03-8480-9bd1e1253861\") " pod="kube-system/coredns-674b8bbfcf-h5nc2" Nov 6 00:23:09.129670 kubelet[2773]: I1106 00:23:09.128087 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjsk\" (UniqueName: \"kubernetes.io/projected/277f2c19-e4a9-4f03-8480-9bd1e1253861-kube-api-access-szjsk\") pod \"coredns-674b8bbfcf-h5nc2\" (UID: \"277f2c19-e4a9-4f03-8480-9bd1e1253861\") " pod="kube-system/coredns-674b8bbfcf-h5nc2" Nov 6 00:23:09.134417 systemd[1]: Created slice kubepods-burstable-podb055a3cb_6725_4c38_a9df_541d3ef5e7bb.slice - libcontainer container kubepods-burstable-podb055a3cb_6725_4c38_a9df_541d3ef5e7bb.slice. Nov 6 00:23:09.149613 systemd[1]: Created slice kubepods-burstable-pod277f2c19_e4a9_4f03_8480_9bd1e1253861.slice - libcontainer container kubepods-burstable-pod277f2c19_e4a9_4f03_8480_9bd1e1253861.slice. Nov 6 00:23:09.304843 systemd[1]: Created slice kubepods-besteffort-podd2b7456c_b14f_487f_b6a5_068ef90c8b4d.slice - libcontainer container kubepods-besteffort-podd2b7456c_b14f_487f_b6a5_068ef90c8b4d.slice. 
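The slice names being created here follow directly from the pod UIDs that appear in the volume-attach entries: with the systemd cgroup driver, the QoS class and the UID (dashes turned into underscores) are embedded in the unit name. A small sketch of that mapping, inferred from these log lines rather than from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name for a pod, as seen in the
// "Created slice ..." entries above.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// coredns-674b8bbfcf-v226n, from the VerifyControllerAttachedVolume entries.
	fmt.Println(podSlice("burstable", "b055a3cb-6725-4c38-a9df-541d3ef5e7bb"))
	// calico-apiserver-74b877ccb8-v7bgg, a BestEffort example.
	fmt.Println(podSlice("besteffort", "d2b7456c-b14f-487f-b6a5-068ef90c8b4d"))
}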
Nov 6 00:23:09.318054 containerd[1518]: time="2025-11-06T00:23:09.317988019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:23:09.330426 kubelet[2773]: I1106 00:23:09.329929 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7fe53f63-7b33-45ac-b5f4-f8e84eb05683-goldmane-key-pair\") pod \"goldmane-666569f655-65l6k\" (UID: \"7fe53f63-7b33-45ac-b5f4-f8e84eb05683\") " pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.330598 kubelet[2773]: I1106 00:23:09.330463 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk856\" (UniqueName: \"kubernetes.io/projected/7fe53f63-7b33-45ac-b5f4-f8e84eb05683-kube-api-access-fk856\") pod \"goldmane-666569f655-65l6k\" (UID: \"7fe53f63-7b33-45ac-b5f4-f8e84eb05683\") " pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.330598 kubelet[2773]: I1106 00:23:09.330495 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-ca-bundle\") pod \"whisker-f79c5f5f8-xwkkl\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " pod="calico-system/whisker-f79c5f5f8-xwkkl" Nov 6 00:23:09.330598 kubelet[2773]: I1106 00:23:09.330529 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe53f63-7b33-45ac-b5f4-f8e84eb05683-config\") pod \"goldmane-666569f655-65l6k\" (UID: \"7fe53f63-7b33-45ac-b5f4-f8e84eb05683\") " pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.330598 kubelet[2773]: I1106 00:23:09.330558 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0495b935-824c-48f0-99f7-45ec9b94fbf9-calico-apiserver-certs\") pod \"calico-apiserver-74b877ccb8-8j9rh\" (UID: \"0495b935-824c-48f0-99f7-45ec9b94fbf9\") " pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" Nov 6 00:23:09.332072 kubelet[2773]: I1106 00:23:09.330606 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2b7456c-b14f-487f-b6a5-068ef90c8b4d-calico-apiserver-certs\") pod \"calico-apiserver-74b877ccb8-v7bgg\" (UID: \"d2b7456c-b14f-487f-b6a5-068ef90c8b4d\") " pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" Nov 6 00:23:09.332072 kubelet[2773]: I1106 00:23:09.330638 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjxv\" (UniqueName: \"kubernetes.io/projected/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-kube-api-access-ffjxv\") pod \"whisker-f79c5f5f8-xwkkl\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " pod="calico-system/whisker-f79c5f5f8-xwkkl" Nov 6 00:23:09.332072 kubelet[2773]: I1106 00:23:09.330672 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d385245-7d9d-431f-b9ed-020a695bf7cd-tigera-ca-bundle\") pod \"calico-kube-controllers-5486c85ff6-xm98d\" (UID: \"5d385245-7d9d-431f-b9ed-020a695bf7cd\") " pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" Nov 6 00:23:09.332072 kubelet[2773]: I1106 00:23:09.330700 2773 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrtx\" (UniqueName: \"kubernetes.io/projected/5d385245-7d9d-431f-b9ed-020a695bf7cd-kube-api-access-jwrtx\") pod \"calico-kube-controllers-5486c85ff6-xm98d\" (UID: \"5d385245-7d9d-431f-b9ed-020a695bf7cd\") " pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" Nov 6 00:23:09.332847 systemd[1]: Created slice kubepods-besteffort-pod5d385245_7d9d_431f_b9ed_020a695bf7cd.slice - libcontainer container kubepods-besteffort-pod5d385245_7d9d_431f_b9ed_020a695bf7cd.slice. Nov 6 00:23:09.333897 kubelet[2773]: I1106 00:23:09.333859 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-backend-key-pair\") pod \"whisker-f79c5f5f8-xwkkl\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " pod="calico-system/whisker-f79c5f5f8-xwkkl" Nov 6 00:23:09.334028 kubelet[2773]: I1106 00:23:09.333946 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fe53f63-7b33-45ac-b5f4-f8e84eb05683-goldmane-ca-bundle\") pod \"goldmane-666569f655-65l6k\" (UID: \"7fe53f63-7b33-45ac-b5f4-f8e84eb05683\") " pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.334028 kubelet[2773]: I1106 00:23:09.333981 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrx9\" (UniqueName: \"kubernetes.io/projected/0495b935-824c-48f0-99f7-45ec9b94fbf9-kube-api-access-5mrx9\") pod \"calico-apiserver-74b877ccb8-8j9rh\" (UID: \"0495b935-824c-48f0-99f7-45ec9b94fbf9\") " pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" Nov 6 00:23:09.334223 kubelet[2773]: I1106 00:23:09.334036 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhl7\" (UniqueName: \"kubernetes.io/projected/d2b7456c-b14f-487f-b6a5-068ef90c8b4d-kube-api-access-9mhl7\") pod \"calico-apiserver-74b877ccb8-v7bgg\" (UID: \"d2b7456c-b14f-487f-b6a5-068ef90c8b4d\") " pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" Nov 6 00:23:09.347796 systemd[1]: Created slice kubepods-besteffort-pod7fe53f63_7b33_45ac_b5f4_f8e84eb05683.slice - libcontainer container kubepods-besteffort-pod7fe53f63_7b33_45ac_b5f4_f8e84eb05683.slice. Nov 6 00:23:09.363778 systemd[1]: Created slice kubepods-besteffort-pod0495b935_824c_48f0_99f7_45ec9b94fbf9.slice - libcontainer container kubepods-besteffort-pod0495b935_824c_48f0_99f7_45ec9b94fbf9.slice. Nov 6 00:23:09.375587 systemd[1]: Created slice kubepods-besteffort-pod8529e5cd_a84e_4052_a7e4_dd7c3f109d44.slice - libcontainer container kubepods-besteffort-pod8529e5cd_a84e_4052_a7e4_dd7c3f109d44.slice. 
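The sandbox-creation attempts recorded next all fail on the same precondition: Calico's CNI plugin stats /var/lib/calico/nodename, a file calico-node writes only once it is running with /var/lib/calico mounted. A minimal check of that precondition (the path comes straight from the error messages below, not from Calico source):

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodename = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodename)
	if err != nil {
		// Matches the "stat /var/lib/calico/nodename: no such file or
		// directory" failures in the sandbox setup errors.
		fmt.Println("not ready:", err)
		return
	}
	fmt.Printf("calico-node has registered this node as %q\n", string(data))
}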
Nov 6 00:23:09.451442 containerd[1518]: time="2025-11-06T00:23:09.451060710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v226n,Uid:b055a3cb-6725-4c38-a9df-541d3ef5e7bb,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:09.460901 containerd[1518]: time="2025-11-06T00:23:09.460010182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5nc2,Uid:277f2c19-e4a9-4f03-8480-9bd1e1253861,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:09.615463 containerd[1518]: time="2025-11-06T00:23:09.614961937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-v7bgg,Uid:d2b7456c-b14f-487f-b6a5-068ef90c8b4d,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:09.643930 containerd[1518]: time="2025-11-06T00:23:09.643870114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5486c85ff6-xm98d,Uid:5d385245-7d9d-431f-b9ed-020a695bf7cd,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:09.646545 containerd[1518]: time="2025-11-06T00:23:09.646494949Z" level=error msg="Failed to destroy network for sandbox \"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.653254 containerd[1518]: time="2025-11-06T00:23:09.653129813Z" level=error msg="Failed to destroy network for sandbox \"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.654620 containerd[1518]: time="2025-11-06T00:23:09.654544295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5nc2,Uid:277f2c19-e4a9-4f03-8480-9bd1e1253861,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.655733 kubelet[2773]: E1106 00:23:09.655013 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.655733 kubelet[2773]: E1106 00:23:09.655117 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h5nc2" Nov 6 00:23:09.655733 kubelet[2773]: E1106 00:23:09.655173 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h5nc2" Nov 6 00:23:09.655981 kubelet[2773]: E1106 00:23:09.655255 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h5nc2_kube-system(277f2c19-e4a9-4f03-8480-9bd1e1253861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h5nc2_kube-system(277f2c19-e4a9-4f03-8480-9bd1e1253861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e3e317192619b05b56e9666d7aaffe6b250e0c63812e0570d3068c9a18b81df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h5nc2" podUID="277f2c19-e4a9-4f03-8480-9bd1e1253861" Nov 6 00:23:09.658418 containerd[1518]: time="2025-11-06T00:23:09.658163324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v226n,Uid:b055a3cb-6725-4c38-a9df-541d3ef5e7bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.659694 containerd[1518]: time="2025-11-06T00:23:09.659645223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65l6k,Uid:7fe53f63-7b33-45ac-b5f4-f8e84eb05683,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:09.660686 kubelet[2773]: E1106 00:23:09.660418 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.660686 kubelet[2773]: E1106 00:23:09.660522 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v226n" Nov 6 00:23:09.660686 kubelet[2773]: E1106 00:23:09.660553 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v226n" Nov 6 00:23:09.661645 kubelet[2773]: E1106 00:23:09.661143 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v226n_kube-system(b055a3cb-6725-4c38-a9df-541d3ef5e7bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-v226n_kube-system(b055a3cb-6725-4c38-a9df-541d3ef5e7bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd15cb57cb706525db8ce337c335de5e340a07c3dc7b610c64ae78531b633311\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v226n" podUID="b055a3cb-6725-4c38-a9df-541d3ef5e7bb" Nov 6 00:23:09.673782 containerd[1518]: time="2025-11-06T00:23:09.673150417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-8j9rh,Uid:0495b935-824c-48f0-99f7-45ec9b94fbf9,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:09.680313 containerd[1518]: time="2025-11-06T00:23:09.680272107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f79c5f5f8-xwkkl,Uid:8529e5cd-a84e-4052-a7e4-dd7c3f109d44,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:09.747273 systemd[1]: run-netns-cni\x2d8beb6932\x2d1816\x2d0164\x2db314\x2d098d8bb6cf1e.mount: Deactivated successfully. Nov 6 00:23:09.747435 systemd[1]: run-netns-cni\x2dec85c723\x2d0068\x2d5361\x2d4ef0\x2d620577f8f582.mount: Deactivated successfully. Nov 6 00:23:09.891834 containerd[1518]: time="2025-11-06T00:23:09.889980769Z" level=error msg="Failed to destroy network for sandbox \"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.902473 systemd[1]: run-netns-cni\x2d46f2c253\x2d3cfb\x2de9f8\x2d8f1a\x2d1e4c60a17842.mount: Deactivated successfully. 
Nov 6 00:23:09.903636 containerd[1518]: time="2025-11-06T00:23:09.903378728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65l6k,Uid:7fe53f63-7b33-45ac-b5f4-f8e84eb05683,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.904238 kubelet[2773]: E1106 00:23:09.904056 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.904238 kubelet[2773]: E1106 00:23:09.904165 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.904238 kubelet[2773]: E1106 00:23:09.904200 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-65l6k" Nov 6 00:23:09.904931 kubelet[2773]: E1106 00:23:09.904725 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-65l6k_calico-system(7fe53f63-7b33-45ac-b5f4-f8e84eb05683)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-65l6k_calico-system(7fe53f63-7b33-45ac-b5f4-f8e84eb05683)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8409a73f718d080f068a7da1975a77fd38ff131e408413c984a9dd69808e78c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:09.932157 containerd[1518]: time="2025-11-06T00:23:09.931983100Z" level=error msg="Failed to destroy network for sandbox \"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.934569 containerd[1518]: time="2025-11-06T00:23:09.934514704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-v7bgg,Uid:d2b7456c-b14f-487f-b6a5-068ef90c8b4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.935912 kubelet[2773]: E1106 00:23:09.935555 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.935912 kubelet[2773]: E1106 00:23:09.935624 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" Nov 6 00:23:09.935912 kubelet[2773]: E1106 00:23:09.935656 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" Nov 6 00:23:09.938218 kubelet[2773]: E1106 00:23:09.935725 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b877ccb8-v7bgg_calico-apiserver(d2b7456c-b14f-487f-b6a5-068ef90c8b4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b877ccb8-v7bgg_calico-apiserver(d2b7456c-b14f-487f-b6a5-068ef90c8b4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebeeb6b190ed1f45d8d55e6a24d25c336525f16a6d4803857af4ec13252e9ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:09.944647 systemd[1]: run-netns-cni\x2d10f8d6a4\x2d32e0\x2d0b4d\x2d7508\x2d68229721afe4.mount: Deactivated successfully. 
Nov 6 00:23:09.952154 containerd[1518]: time="2025-11-06T00:23:09.952104254Z" level=error msg="Failed to destroy network for sandbox \"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.956476 containerd[1518]: time="2025-11-06T00:23:09.956260274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5486c85ff6-xm98d,Uid:5d385245-7d9d-431f-b9ed-020a695bf7cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.958037 kubelet[2773]: E1106 00:23:09.957549 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.958037 kubelet[2773]: E1106 00:23:09.957659 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" Nov 6 00:23:09.958037 kubelet[2773]: E1106 00:23:09.957723 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" Nov 6 00:23:09.958398 kubelet[2773]: E1106 00:23:09.957965 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5486c85ff6-xm98d_calico-system(5d385245-7d9d-431f-b9ed-020a695bf7cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5486c85ff6-xm98d_calico-system(5d385245-7d9d-431f-b9ed-020a695bf7cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd87fc5c118da6552f9f2e2025eff22dd35f880dbd8014733880dd064b8b2536\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:09.961133 systemd[1]: run-netns-cni\x2dd321b756\x2d8cc4\x2d4a28\x2d082f\x2d923d91551443.mount: Deactivated successfully. 
Nov 6 00:23:09.969719 containerd[1518]: time="2025-11-06T00:23:09.969588228Z" level=error msg="Failed to destroy network for sandbox \"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.977775 containerd[1518]: time="2025-11-06T00:23:09.975300680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f79c5f5f8-xwkkl,Uid:8529e5cd-a84e-4052-a7e4-dd7c3f109d44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.978317 kubelet[2773]: E1106 00:23:09.978258 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.979802 kubelet[2773]: E1106 00:23:09.978438 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f79c5f5f8-xwkkl" Nov 6 00:23:09.979802 kubelet[2773]: E1106 00:23:09.978479 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f79c5f5f8-xwkkl" Nov 6 00:23:09.979802 kubelet[2773]: E1106 00:23:09.978557 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f79c5f5f8-xwkkl_calico-system(8529e5cd-a84e-4052-a7e4-dd7c3f109d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f79c5f5f8-xwkkl_calico-system(8529e5cd-a84e-4052-a7e4-dd7c3f109d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"559136f42c244c8f399e2a0efaca5b9e10acc67989a5a07e37b866785395a434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f79c5f5f8-xwkkl" podUID="8529e5cd-a84e-4052-a7e4-dd7c3f109d44" Nov 6 00:23:09.979173 systemd[1]: run-netns-cni\x2d14d6e26d\x2dd83a\x2da9ea\x2d708d\x2df5ba9ef57550.mount: Deactivated successfully. 
Nov 6 00:23:09.996064 containerd[1518]: time="2025-11-06T00:23:09.995995945Z" level=error msg="Failed to destroy network for sandbox \"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.997913 containerd[1518]: time="2025-11-06T00:23:09.997845844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-8j9rh,Uid:0495b935-824c-48f0-99f7-45ec9b94fbf9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.998274 kubelet[2773]: E1106 00:23:09.998214 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:09.998405 kubelet[2773]: E1106 00:23:09.998288 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" Nov 6 00:23:09.998748 kubelet[2773]: E1106 00:23:09.998686 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" Nov 6 00:23:09.999972 kubelet[2773]: E1106 00:23:09.999897 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b877ccb8-8j9rh_calico-apiserver(0495b935-824c-48f0-99f7-45ec9b94fbf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b877ccb8-8j9rh_calico-apiserver(0495b935-824c-48f0-99f7-45ec9b94fbf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f2f90bd1e1700b7ded758f39ac4cae0f3fe95930d914565971d781efde1068d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:10.133703 systemd[1]: Created slice kubepods-besteffort-pod70152e9b_de49_41f1_96dc_b8cd479787b2.slice - libcontainer container kubepods-besteffort-pod70152e9b_de49_41f1_96dc_b8cd479787b2.slice. 
Nov 6 00:23:10.137869 containerd[1518]: time="2025-11-06T00:23:10.137480691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgmkz,Uid:70152e9b-de49-41f1-96dc-b8cd479787b2,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:10.240010 containerd[1518]: time="2025-11-06T00:23:10.239801292Z" level=error msg="Failed to destroy network for sandbox \"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:10.242802 containerd[1518]: time="2025-11-06T00:23:10.241902011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgmkz,Uid:70152e9b-de49-41f1-96dc-b8cd479787b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:10.243431 kubelet[2773]: E1106 00:23:10.243380 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:10.244319 kubelet[2773]: E1106 00:23:10.243633 2773 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:23:10.244594 kubelet[2773]: E1106 00:23:10.244334 2773 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgmkz" Nov 6 00:23:10.245614 kubelet[2773]: E1106 00:23:10.245549 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9265729c51a184c367e3300979610ed8fc8910b44d35cc9eed957333410c6fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:10.700162 systemd[1]: run-netns-cni\x2d738df6fd\x2d1b4b\x2dbf7f\x2db9d5\x2ddb61bd183575.mount: Deactivated successfully. 
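Every sandbox add and delete in this stretch fails on the same stat of /var/lib/calico/nodename: the CNI plugin cannot resolve the Calico node name until the calico/node container (whose image pull started earlier and completes just below) is running and has written that file into the mounted /var/lib/calico/ directory. A minimal sketch of the readiness check implied by the error text, not the plugin's actual code:

    import os

    # Path taken verbatim from the error message; per that message, calico/node
    # mounts /var/lib/calico/ and the file appears once the container is running.
    NODENAME = "/var/lib/calico/nodename"

    def calico_node_ready() -> bool:
        """Mirror the stat check the CNI error message describes."""
        return os.path.isfile(NODENAME)

    if not calico_node_ready():
        print(f"stat {NODENAME}: no such file or directory - "
              "check that the calico/node container is running")
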
Nov 6 00:23:16.310026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559267920.mount: Deactivated successfully. Nov 6 00:23:16.339807 containerd[1518]: time="2025-11-06T00:23:16.339456503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:16.341043 containerd[1518]: time="2025-11-06T00:23:16.340978649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:23:16.342980 containerd[1518]: time="2025-11-06T00:23:16.342919494Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:16.345852 containerd[1518]: time="2025-11-06T00:23:16.345783163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:16.346786 containerd[1518]: time="2025-11-06T00:23:16.346732947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.028468325s" Nov 6 00:23:16.347643 containerd[1518]: time="2025-11-06T00:23:16.346789792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:23:16.377116 containerd[1518]: time="2025-11-06T00:23:16.377072970Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:23:16.389950 containerd[1518]: time="2025-11-06T00:23:16.389904575Z" level=info msg="Container c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:16.404280 containerd[1518]: time="2025-11-06T00:23:16.404226312Z" level=info msg="CreateContainer within sandbox \"df12fb6455e73e2b27bf261d284f123a66b32e0bad78aa2bd7f801a0621bfb20\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\"" Nov 6 00:23:16.405791 containerd[1518]: time="2025-11-06T00:23:16.404990962Z" level=info msg="StartContainer for \"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\"" Nov 6 00:23:16.407998 containerd[1518]: time="2025-11-06T00:23:16.407953596Z" level=info msg="connecting to shim c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d" address="unix:///run/containerd/s/ce063ad080255ae27f12f630ebce9097a237ee79c9ded9455b68061049ec03e9" protocol=ttrpc version=3 Nov 6 00:23:16.441941 systemd[1]: Started cri-containerd-c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d.scope - libcontainer container c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d. Nov 6 00:23:16.513786 containerd[1518]: time="2025-11-06T00:23:16.512232044Z" level=info msg="StartContainer for \"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\" returns successfully" Nov 6 00:23:16.629649 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 6 00:23:16.629841 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 6 00:23:16.892516 kubelet[2773]: I1106 00:23:16.891882 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-backend-key-pair\") pod \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " Nov 6 00:23:16.892516 kubelet[2773]: I1106 00:23:16.891946 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-ca-bundle\") pod \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " Nov 6 00:23:16.892516 kubelet[2773]: I1106 00:23:16.891988 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjxv\" (UniqueName: \"kubernetes.io/projected/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-kube-api-access-ffjxv\") pod \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\" (UID: \"8529e5cd-a84e-4052-a7e4-dd7c3f109d44\") " Nov 6 00:23:16.896101 kubelet[2773]: I1106 00:23:16.896030 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8529e5cd-a84e-4052-a7e4-dd7c3f109d44" (UID: "8529e5cd-a84e-4052-a7e4-dd7c3f109d44"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:23:16.900900 kubelet[2773]: I1106 00:23:16.900668 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-kube-api-access-ffjxv" (OuterVolumeSpecName: "kube-api-access-ffjxv") pod "8529e5cd-a84e-4052-a7e4-dd7c3f109d44" (UID: "8529e5cd-a84e-4052-a7e4-dd7c3f109d44"). InnerVolumeSpecName "kube-api-access-ffjxv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:23:16.901900 kubelet[2773]: I1106 00:23:16.901863 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8529e5cd-a84e-4052-a7e4-dd7c3f109d44" (UID: "8529e5cd-a84e-4052-a7e4-dd7c3f109d44"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:23:16.993087 kubelet[2773]: I1106 00:23:16.993038 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-ca-bundle\") on node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" DevicePath \"\"" Nov 6 00:23:16.993087 kubelet[2773]: I1106 00:23:16.993082 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffjxv\" (UniqueName: \"kubernetes.io/projected/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-kube-api-access-ffjxv\") on node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" DevicePath \"\"" Nov 6 00:23:16.993087 kubelet[2773]: I1106 00:23:16.993100 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8529e5cd-a84e-4052-a7e4-dd7c3f109d44-whisker-backend-key-pair\") on node \"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e\" DevicePath \"\"" Nov 6 00:23:17.309439 systemd[1]: var-lib-kubelet-pods-8529e5cd\x2da84e\x2d4052\x2da7e4\x2ddd7c3f109d44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffjxv.mount: Deactivated successfully. Nov 6 00:23:17.309622 systemd[1]: var-lib-kubelet-pods-8529e5cd\x2da84e\x2d4052\x2da7e4\x2ddd7c3f109d44-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:23:17.371918 systemd[1]: Removed slice kubepods-besteffort-pod8529e5cd_a84e_4052_a7e4_dd7c3f109d44.slice - libcontainer container kubepods-besteffort-pod8529e5cd_a84e_4052_a7e4_dd7c3f109d44.slice. Nov 6 00:23:17.405166 kubelet[2773]: I1106 00:23:17.404549 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rpdt6" podStartSLOduration=2.054694086 podStartE2EDuration="19.404527103s" podCreationTimestamp="2025-11-06 00:22:58 +0000 UTC" firstStartedPulling="2025-11-06 00:22:58.998714695 +0000 UTC m=+25.114893605" lastFinishedPulling="2025-11-06 00:23:16.348547703 +0000 UTC m=+42.464726622" observedRunningTime="2025-11-06 00:23:17.400008756 +0000 UTC m=+43.516187682" watchObservedRunningTime="2025-11-06 00:23:17.404527103 +0000 UTC m=+43.520706030" Nov 6 00:23:17.491811 systemd[1]: Created slice kubepods-besteffort-pod927ec230_fe67_4f72_91a2_11014246002e.slice - libcontainer container kubepods-besteffort-pod927ec230_fe67_4f72_91a2_11014246002e.slice. 
Nov 6 00:23:17.497591 kubelet[2773]: I1106 00:23:17.497541 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927ec230-fe67-4f72-91a2-11014246002e-whisker-ca-bundle\") pod \"whisker-f67c8cdb4-zm6bl\" (UID: \"927ec230-fe67-4f72-91a2-11014246002e\") " pod="calico-system/whisker-f67c8cdb4-zm6bl" Nov 6 00:23:17.497786 kubelet[2773]: I1106 00:23:17.497615 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/927ec230-fe67-4f72-91a2-11014246002e-whisker-backend-key-pair\") pod \"whisker-f67c8cdb4-zm6bl\" (UID: \"927ec230-fe67-4f72-91a2-11014246002e\") " pod="calico-system/whisker-f67c8cdb4-zm6bl" Nov 6 00:23:17.497786 kubelet[2773]: I1106 00:23:17.497651 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgxq\" (UniqueName: \"kubernetes.io/projected/927ec230-fe67-4f72-91a2-11014246002e-kube-api-access-kdgxq\") pod \"whisker-f67c8cdb4-zm6bl\" (UID: \"927ec230-fe67-4f72-91a2-11014246002e\") " pod="calico-system/whisker-f67c8cdb4-zm6bl" Nov 6 00:23:17.668502 containerd[1518]: time="2025-11-06T00:23:17.668223886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\" id:\"0b69e046e0eadf6e62c609aa78515972793da614e1e2eeb4fcee4335ea659ac8\" pid:3961 exit_status:1 exited_at:{seconds:1762388597 nanos:667692471}" Nov 6 00:23:17.797696 containerd[1518]: time="2025-11-06T00:23:17.797639555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f67c8cdb4-zm6bl,Uid:927ec230-fe67-4f72-91a2-11014246002e,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:17.812011 kubelet[2773]: I1106 00:23:17.811953 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:23:17.971669 systemd-networkd[1405]: cali8d82c68210b: Link UP Nov 6 00:23:17.973987 systemd-networkd[1405]: cali8d82c68210b: Gained carrier Nov 6 00:23:17.997128 containerd[1518]: 2025-11-06 00:23:17.858 [INFO][3987] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:17.997128 containerd[1518]: 2025-11-06 00:23:17.883 [INFO][3987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0 whisker-f67c8cdb4- calico-system 927ec230-fe67-4f72-91a2-11014246002e 888 0 2025-11-06 00:23:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f67c8cdb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e whisker-f67c8cdb4-zm6bl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8d82c68210b [] [] }} ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-" Nov 6 00:23:17.997128 containerd[1518]: 2025-11-06 00:23:17.883 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" 
WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.997128 containerd[1518]: 2025-11-06 00:23:17.916 [INFO][4000] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" HandleID="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.916 [INFO][4000] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" HandleID="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"whisker-f67c8cdb4-zm6bl", "timestamp":"2025-11-06 00:23:17.916523939 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.916 [INFO][4000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.916 [INFO][4000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.916 [INFO][4000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.925 [INFO][4000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.932 [INFO][4000] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.936 [INFO][4000] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.997891 containerd[1518]: 2025-11-06 00:23:17.938 [INFO][4000] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.940 [INFO][4000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.940 [INFO][4000] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.942 [INFO][4000] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46 Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.946 [INFO][4000] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.954 [INFO][4000] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.1/26] block=192.168.16.0/26 handle="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.954 [INFO][4000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.1/26] handle="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.955 [INFO][4000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:17.998706 containerd[1518]: 2025-11-06 00:23:17.955 [INFO][4000] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.1/26] IPv6=[] ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" HandleID="k8s-pod-network.406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.999427 containerd[1518]: 2025-11-06 00:23:17.959 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0", GenerateName:"whisker-f67c8cdb4-", Namespace:"calico-system", SelfLink:"", UID:"927ec230-fe67-4f72-91a2-11014246002e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f67c8cdb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"whisker-f67c8cdb4-zm6bl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d82c68210b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:17.999626 containerd[1518]: 2025-11-06 00:23:17.960 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: 
[192.168.16.1/32] ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.999626 containerd[1518]: 2025-11-06 00:23:17.960 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d82c68210b ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.999626 containerd[1518]: 2025-11-06 00:23:17.970 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:17.999897 containerd[1518]: 2025-11-06 00:23:17.970 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0", GenerateName:"whisker-f67c8cdb4-", Namespace:"calico-system", SelfLink:"", UID:"927ec230-fe67-4f72-91a2-11014246002e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f67c8cdb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46", Pod:"whisker-f67c8cdb4-zm6bl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d82c68210b", MAC:"66:66:3c:57:ed:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:18.000121 containerd[1518]: 2025-11-06 00:23:17.984 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" Namespace="calico-system" Pod="whisker-f67c8cdb4-zm6bl" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-whisker--f67c8cdb4--zm6bl-eth0" Nov 6 00:23:18.032503 containerd[1518]: time="2025-11-06T00:23:18.032447464Z" level=info msg="connecting to shim 
406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46" address="unix:///run/containerd/s/d8f9f7b995586e65a66480cc3d456f05a1abb99724253c0187b1ce7b206292da" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:18.064006 systemd[1]: Started cri-containerd-406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46.scope - libcontainer container 406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46. Nov 6 00:23:18.128828 kubelet[2773]: I1106 00:23:18.128487 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8529e5cd-a84e-4052-a7e4-dd7c3f109d44" path="/var/lib/kubelet/pods/8529e5cd-a84e-4052-a7e4-dd7c3f109d44/volumes" Nov 6 00:23:18.137718 containerd[1518]: time="2025-11-06T00:23:18.137665694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f67c8cdb4-zm6bl,Uid:927ec230-fe67-4f72-91a2-11014246002e,Namespace:calico-system,Attempt:0,} returns sandbox id \"406c26539b2dd54fc63858fcaa268065e8f92f286967cdd4c206bc7bbdad0d46\"" Nov 6 00:23:18.140722 containerd[1518]: time="2025-11-06T00:23:18.140685802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:23:18.294651 containerd[1518]: time="2025-11-06T00:23:18.294581383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:18.296069 containerd[1518]: time="2025-11-06T00:23:18.296013460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:23:18.296236 containerd[1518]: time="2025-11-06T00:23:18.296124000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:23:18.296348 kubelet[2773]: E1106 00:23:18.296309 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:18.296463 kubelet[2773]: E1106 00:23:18.296373 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:18.296855 kubelet[2773]: E1106 00:23:18.296576 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d89eb6d345984e0dbfc52267d62dcbe4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:18.300078 containerd[1518]: time="2025-11-06T00:23:18.299801895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:23:18.474093 containerd[1518]: time="2025-11-06T00:23:18.474030084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:18.475452 containerd[1518]: time="2025-11-06T00:23:18.475394296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:23:18.475795 containerd[1518]: time="2025-11-06T00:23:18.475513458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:18.475885 kubelet[2773]: E1106 00:23:18.475733 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:18.475885 kubelet[2773]: E1106 00:23:18.475810 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:18.476283 kubelet[2773]: E1106 00:23:18.476006 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:18.477524 kubelet[2773]: E1106 00:23:18.477467 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:23:18.617745 containerd[1518]: time="2025-11-06T00:23:18.617357377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\" id:\"2d87552d92ee6118c1ad0b53a3a345d5452c8377205b162e8f31f858263a55d0\" pid:4084 exit_status:1 
exited_at:{seconds:1762388598 nanos:617015099}" Nov 6 00:23:19.114122 systemd-networkd[1405]: vxlan.calico: Link UP Nov 6 00:23:19.114135 systemd-networkd[1405]: vxlan.calico: Gained carrier Nov 6 00:23:19.294014 systemd-networkd[1405]: cali8d82c68210b: Gained IPv6LL Nov 6 00:23:19.368795 kubelet[2773]: E1106 00:23:19.367980 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:23:20.124537 containerd[1518]: time="2025-11-06T00:23:20.124413991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-v7bgg,Uid:d2b7456c-b14f-487f-b6a5-068ef90c8b4d,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:20.271141 systemd-networkd[1405]: cali4e33eba6811: Link UP Nov 6 00:23:20.272980 systemd-networkd[1405]: cali4e33eba6811: Gained carrier Nov 6 00:23:20.299893 containerd[1518]: 2025-11-06 00:23:20.182 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0 calico-apiserver-74b877ccb8- calico-apiserver d2b7456c-b14f-487f-b6a5-068ef90c8b4d 821 0 2025-11-06 00:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b877ccb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e calico-apiserver-74b877ccb8-v7bgg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e33eba6811 [] [] }} ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-" Nov 6 00:23:20.299893 containerd[1518]: 2025-11-06 00:23:20.182 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.299893 containerd[1518]: 2025-11-06 00:23:20.222 [INFO][4229] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" 
HandleID="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.223 [INFO][4229] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" HandleID="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"calico-apiserver-74b877ccb8-v7bgg", "timestamp":"2025-11-06 00:23:20.222799763 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.223 [INFO][4229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.223 [INFO][4229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.223 [INFO][4229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.233 [INFO][4229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.238 [INFO][4229] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.243 [INFO][4229] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.300643 containerd[1518]: 2025-11-06 00:23:20.245 [INFO][4229] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.247 [INFO][4229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.247 [INFO][4229] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.249 [INFO][4229] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4 Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.257 [INFO][4229] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" 
host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.264 [INFO][4229] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.2/26] block=192.168.16.0/26 handle="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.264 [INFO][4229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.2/26] handle="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.264 [INFO][4229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:20.302532 containerd[1518]: 2025-11-06 00:23:20.264 [INFO][4229] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.2/26] IPv6=[] ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" HandleID="k8s-pod-network.5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.303063 containerd[1518]: 2025-11-06 00:23:20.267 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0", GenerateName:"calico-apiserver-74b877ccb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2b7456c-b14f-487f-b6a5-068ef90c8b4d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b877ccb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"calico-apiserver-74b877ccb8-v7bgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e33eba6811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:20.303196 containerd[1518]: 2025-11-06 00:23:20.267 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.2/32] ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" 
WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.303196 containerd[1518]: 2025-11-06 00:23:20.267 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e33eba6811 ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.303196 containerd[1518]: 2025-11-06 00:23:20.274 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.303363 containerd[1518]: 2025-11-06 00:23:20.275 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0", GenerateName:"calico-apiserver-74b877ccb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2b7456c-b14f-487f-b6a5-068ef90c8b4d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b877ccb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4", Pod:"calico-apiserver-74b877ccb8-v7bgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e33eba6811", MAC:"9e:51:07:85:cb:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:20.303363 containerd[1518]: 2025-11-06 00:23:20.294 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-v7bgg" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--v7bgg-eth0" Nov 6 00:23:20.345996 containerd[1518]: 
time="2025-11-06T00:23:20.345930629Z" level=info msg="connecting to shim 5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4" address="unix:///run/containerd/s/fa37fed941d9f11756f103fb6b81d662a6c481f59f0e3178fc85118201971d8f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:20.394977 systemd[1]: Started cri-containerd-5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4.scope - libcontainer container 5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4. Nov 6 00:23:20.461418 containerd[1518]: time="2025-11-06T00:23:20.461366582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-v7bgg,Uid:d2b7456c-b14f-487f-b6a5-068ef90c8b4d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5c464ab53a73c3e398c68d3661b0650e05c0c004b44d12e8bfba98c8706110b4\"" Nov 6 00:23:20.469077 containerd[1518]: time="2025-11-06T00:23:20.469030547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:20.510074 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL Nov 6 00:23:20.625038 containerd[1518]: time="2025-11-06T00:23:20.624971461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:20.626577 containerd[1518]: time="2025-11-06T00:23:20.626507703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:20.626920 containerd[1518]: time="2025-11-06T00:23:20.626545522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:20.627000 kubelet[2773]: E1106 00:23:20.626877 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:20.627000 kubelet[2773]: E1106 00:23:20.626933 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:20.627548 kubelet[2773]: E1106 00:23:20.627124 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mhl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-v7bgg_calico-apiserver(d2b7456c-b14f-487f-b6a5-068ef90c8b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:20.628654 kubelet[2773]: E1106 00:23:20.628586 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:21.125145 containerd[1518]: time="2025-11-06T00:23:21.125070885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5486c85ff6-xm98d,Uid:5d385245-7d9d-431f-b9ed-020a695bf7cd,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:21.265555 systemd-networkd[1405]: caliae6c17a408a: Link UP Nov 6 00:23:21.265878 systemd-networkd[1405]: caliae6c17a408a: Gained carrier Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.179 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0 calico-kube-controllers-5486c85ff6- calico-system 
5d385245-7d9d-431f-b9ed-020a695bf7cd 825 0 2025-11-06 00:22:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5486c85ff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e calico-kube-controllers-5486c85ff6-xm98d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae6c17a408a [] [] }} ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.179 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.216 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" HandleID="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.216 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" HandleID="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"calico-kube-controllers-5486c85ff6-xm98d", "timestamp":"2025-11-06 00:23:21.216582272 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.216 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.216 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.216 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.225 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.232 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.237 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.239 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.242 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.242 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.244 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.249 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.257 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.3/26] block=192.168.16.0/26 handle="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.257 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.3/26] handle="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.258 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:21.288840 containerd[1518]: 2025-11-06 00:23:21.258 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.3/26] IPv6=[] ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" HandleID="k8s-pod-network.10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.260 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0", GenerateName:"calico-kube-controllers-5486c85ff6-", Namespace:"calico-system", SelfLink:"", UID:"5d385245-7d9d-431f-b9ed-020a695bf7cd", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5486c85ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"calico-kube-controllers-5486c85ff6-xm98d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae6c17a408a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.261 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.3/32] ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.261 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae6c17a408a ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.264 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.266 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0", GenerateName:"calico-kube-controllers-5486c85ff6-", Namespace:"calico-system", SelfLink:"", UID:"5d385245-7d9d-431f-b9ed-020a695bf7cd", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5486c85ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b", Pod:"calico-kube-controllers-5486c85ff6-xm98d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae6c17a408a", MAC:"f6:1e:60:75:a3:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:21.292147 containerd[1518]: 2025-11-06 00:23:21.286 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" Namespace="calico-system" Pod="calico-kube-controllers-5486c85ff6-xm98d" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--kube--controllers--5486c85ff6--xm98d-eth0" Nov 6 00:23:21.337942 containerd[1518]: time="2025-11-06T00:23:21.337858679Z" level=info msg="connecting to shim 10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b" address="unix:///run/containerd/s/6ec84f4b33423286a5a7d81af28a8e6438ceacaf2d347d5bd5d82e19331ab079" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:21.390247 kubelet[2773]: E1106 00:23:21.389330 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:21.391968 systemd[1]: Started cri-containerd-10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b.scope - libcontainer container 10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b. Nov 6 00:23:21.481371 containerd[1518]: time="2025-11-06T00:23:21.481293467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5486c85ff6-xm98d,Uid:5d385245-7d9d-431f-b9ed-020a695bf7cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"10d114dd305a8ae66d5be38f98fb52723b5fb12c2235e642531d2ae37c6dc90b\"" Nov 6 00:23:21.483828 containerd[1518]: time="2025-11-06T00:23:21.483787989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:23:21.657171 containerd[1518]: time="2025-11-06T00:23:21.657011918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:21.658608 containerd[1518]: time="2025-11-06T00:23:21.658556156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:23:21.658819 containerd[1518]: time="2025-11-06T00:23:21.658670066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:21.659325 kubelet[2773]: E1106 00:23:21.658915 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:21.659325 kubelet[2773]: E1106 00:23:21.658971 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:21.659325 kubelet[2773]: E1106 00:23:21.659164 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwrtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5486c85ff6-xm98d_calico-system(5d385245-7d9d-431f-b9ed-020a695bf7cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:21.661233 kubelet[2773]: E1106 00:23:21.661085 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:22.045936 systemd-networkd[1405]: cali4e33eba6811: Gained IPv6LL Nov 6 00:23:22.125140 
containerd[1518]: time="2025-11-06T00:23:22.125090053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5nc2,Uid:277f2c19-e4a9-4f03-8480-9bd1e1253861,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:22.125908 containerd[1518]: time="2025-11-06T00:23:22.125872003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgmkz,Uid:70152e9b-de49-41f1-96dc-b8cd479787b2,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:22.399000 systemd-networkd[1405]: cali4c83f3b4db4: Link UP Nov 6 00:23:22.400647 systemd-networkd[1405]: cali4c83f3b4db4: Gained carrier Nov 6 00:23:22.409976 kubelet[2773]: E1106 00:23:22.408228 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:22.410365 kubelet[2773]: E1106 00:23:22.410331 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.221 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0 coredns-674b8bbfcf- kube-system 277f2c19-e4a9-4f03-8480-9bd1e1253861 819 0 2025-11-06 00:22:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e coredns-674b8bbfcf-h5nc2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c83f3b4db4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.221 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.306 [INFO][4390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" HandleID="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.307 [INFO][4390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" HandleID="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"coredns-674b8bbfcf-h5nc2", "timestamp":"2025-11-06 00:23:22.306739926 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.307 [INFO][4390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.308 [INFO][4390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.308 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.330 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.340 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.347 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.350 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.353 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.353 [INFO][4390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.355 [INFO][4390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.359 [INFO][4390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 
handle="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.4/26] block=192.168.16.0/26 handle="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.4/26] handle="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:22.433398 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.4/26] IPv6=[] ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" HandleID="k8s-pod-network.6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.379 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"277f2c19-e4a9-4f03-8480-9bd1e1253861", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"coredns-674b8bbfcf-h5nc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c83f3b4db4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.380 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.4/32] ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.380 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c83f3b4db4 ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.407 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.411 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"277f2c19-e4a9-4f03-8480-9bd1e1253861", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b", Pod:"coredns-674b8bbfcf-h5nc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c83f3b4db4", MAC:"46:43:c3:e0:fb:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:22.436187 containerd[1518]: 2025-11-06 00:23:22.428 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5nc2" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--h5nc2-eth0" Nov 6 00:23:22.494211 containerd[1518]: time="2025-11-06T00:23:22.493914146Z" level=info msg="connecting to shim 6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b" address="unix:///run/containerd/s/ec53da0307b979a6665ef2dbff40d0e1857bcf75ee2efa0ea3196386c4c571d9" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:22.550078 systemd[1]: Started cri-containerd-6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b.scope - libcontainer container 6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b. Nov 6 00:23:22.622080 systemd-networkd[1405]: caliae6c17a408a: Gained IPv6LL Nov 6 00:23:22.629516 systemd-networkd[1405]: cali2cbbaeda1b5: Link UP Nov 6 00:23:22.630935 systemd-networkd[1405]: cali2cbbaeda1b5: Gained carrier Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.269 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0 csi-node-driver- calico-system 70152e9b-de49-41f1-96dc-b8cd479787b2 750 0 2025-11-06 00:22:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e csi-node-driver-zgmkz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2cbbaeda1b5 [] [] }} ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.270 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.337 [INFO][4396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" HandleID="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.338 [INFO][4396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" HandleID="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"csi-node-driver-zgmkz", "timestamp":"2025-11-06 00:23:22.337152478 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.338 [INFO][4396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.371 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.446 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.531 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.563 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.568 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.575 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.575 [INFO][4396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.587 [INFO][4396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3 Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.596 [INFO][4396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.616 [INFO][4396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.5/26] block=192.168.16.0/26 handle="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.617 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.5/26] handle="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:22.660735 containerd[1518]: 
2025-11-06 00:23:22.617 [INFO][4396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:22.660735 containerd[1518]: 2025-11-06 00:23:22.617 [INFO][4396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.5/26] IPv6=[] ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" HandleID="k8s-pod-network.35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.621 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70152e9b-de49-41f1-96dc-b8cd479787b2", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"csi-node-driver-zgmkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cbbaeda1b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.621 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.5/32] ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.621 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cbbaeda1b5 ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.629 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" 
WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.629 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70152e9b-de49-41f1-96dc-b8cd479787b2", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3", Pod:"csi-node-driver-zgmkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cbbaeda1b5", MAC:"7e:8b:b8:94:de:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:22.662603 containerd[1518]: 2025-11-06 00:23:22.655 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" Namespace="calico-system" Pod="csi-node-driver-zgmkz" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-csi--node--driver--zgmkz-eth0" Nov 6 00:23:22.713477 containerd[1518]: time="2025-11-06T00:23:22.713415983Z" level=info msg="connecting to shim 35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3" address="unix:///run/containerd/s/66a0050f1454572dfd270c364ae7e3397023cf0fd88f9f0246544d7d8ac977f0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:22.752149 systemd[1]: Started cri-containerd-35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3.scope - libcontainer container 35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3. 
Nov 6 00:23:22.823906 containerd[1518]: time="2025-11-06T00:23:22.823701939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgmkz,Uid:70152e9b-de49-41f1-96dc-b8cd479787b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"35d929409849c16c60947a6b111ff5c7c9d8ab4ac9676839f1008f7f7cc09da3\"" Nov 6 00:23:22.838128 containerd[1518]: time="2025-11-06T00:23:22.838073651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:23:22.859966 containerd[1518]: time="2025-11-06T00:23:22.859409730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5nc2,Uid:277f2c19-e4a9-4f03-8480-9bd1e1253861,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b\"" Nov 6 00:23:22.877835 containerd[1518]: time="2025-11-06T00:23:22.877778256Z" level=info msg="CreateContainer within sandbox \"6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:23:22.889783 containerd[1518]: time="2025-11-06T00:23:22.889450973Z" level=info msg="Container 24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:22.902120 containerd[1518]: time="2025-11-06T00:23:22.902065994Z" level=info msg="CreateContainer within sandbox \"6a9c497ddc715e9518c7fd81b9193f5c5b2c7051e6a0b3934f8a0a84da83669b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac\"" Nov 6 00:23:22.903705 containerd[1518]: time="2025-11-06T00:23:22.903060412Z" level=info msg="StartContainer for \"24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac\"" Nov 6 00:23:22.910127 containerd[1518]: time="2025-11-06T00:23:22.910036391Z" level=info msg="connecting to shim 24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac" address="unix:///run/containerd/s/ec53da0307b979a6665ef2dbff40d0e1857bcf75ee2efa0ea3196386c4c571d9" protocol=ttrpc version=3 Nov 6 00:23:22.961027 systemd[1]: Started cri-containerd-24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac.scope - libcontainer container 24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac. 
Nov 6 00:23:23.009130 containerd[1518]: time="2025-11-06T00:23:23.008931306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:23.010594 containerd[1518]: time="2025-11-06T00:23:23.010532705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:23:23.010894 containerd[1518]: time="2025-11-06T00:23:23.010658428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:23:23.011639 kubelet[2773]: E1106 00:23:23.011573 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:23.012161 kubelet[2773]: E1106 00:23:23.011663 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:23.013055 kubelet[2773]: E1106 00:23:23.012971 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:23.018240 containerd[1518]: time="2025-11-06T00:23:23.018181719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:23:23.033559 containerd[1518]: time="2025-11-06T00:23:23.033465444Z" level=info msg="StartContainer for \"24012b505f279000fc66177f938b0e7dc1f8ed74ba004f331eea42da28f125ac\" returns successfully" Nov 6 00:23:23.127040 containerd[1518]: time="2025-11-06T00:23:23.126027830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v226n,Uid:b055a3cb-6725-4c38-a9df-541d3ef5e7bb,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:23.128010 containerd[1518]: time="2025-11-06T00:23:23.127974194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65l6k,Uid:7fe53f63-7b33-45ac-b5f4-f8e84eb05683,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:23.204047 containerd[1518]: time="2025-11-06T00:23:23.202714298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:23.207790 containerd[1518]: time="2025-11-06T00:23:23.207096532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:23:23.208251 containerd[1518]: time="2025-11-06T00:23:23.208068392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:23:23.210986 kubelet[2773]: E1106 00:23:23.210933 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:23:23.211123 kubelet[2773]: E1106 00:23:23.210999 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:23:23.211348 kubelet[2773]: E1106 00:23:23.211165 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:23.212963 kubelet[2773]: E1106 00:23:23.212800 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:23.417552 kubelet[2773]: E1106 00:23:23.417423 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:23.425910 kubelet[2773]: E1106 00:23:23.425808 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:23.450262 systemd-networkd[1405]: cali385de09ca18: Link UP Nov 6 00:23:23.455607 systemd-networkd[1405]: cali385de09ca18: Gained carrier Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.282 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0 goldmane-666569f655- calico-system 7fe53f63-7b33-45ac-b5f4-f8e84eb05683 824 0 2025-11-06 00:22:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e goldmane-666569f655-65l6k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali385de09ca18 [] [] }} ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.283 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.354 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" HandleID="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.354 [INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" HandleID="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" 
Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5ce0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"goldmane-666569f655-65l6k", "timestamp":"2025-11-06 00:23:23.354036745 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.355 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.355 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.355 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.370 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.381 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.391 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.394 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.398 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.398 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.400 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.410 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.435 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.6/26] block=192.168.16.0/26 handle="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.436 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.6/26] 
handle="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.436 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:23.505435 containerd[1518]: 2025-11-06 00:23:23.436 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.6/26] IPv6=[] ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" HandleID="k8s-pod-network.4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.442 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7fe53f63-7b33-45ac-b5f4-f8e84eb05683", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"goldmane-666569f655-65l6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali385de09ca18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.443 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.6/32] ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.443 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali385de09ca18 ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.459 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.463 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7fe53f63-7b33-45ac-b5f4-f8e84eb05683", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b", Pod:"goldmane-666569f655-65l6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali385de09ca18", MAC:"c6:f7:9f:46:43:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:23.507745 containerd[1518]: 2025-11-06 00:23:23.498 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" Namespace="calico-system" Pod="goldmane-666569f655-65l6k" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-goldmane--666569f655--65l6k-eth0" Nov 6 00:23:23.548772 kubelet[2773]: I1106 00:23:23.548675 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h5nc2" podStartSLOduration=44.548648906 podStartE2EDuration="44.548648906s" podCreationTimestamp="2025-11-06 00:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:23.548525591 +0000 UTC m=+49.664704533" watchObservedRunningTime="2025-11-06 00:23:23.548648906 +0000 UTC m=+49.664827832" Nov 6 00:23:23.577256 containerd[1518]: time="2025-11-06T00:23:23.575843031Z" level=info msg="connecting to shim 4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b" address="unix:///run/containerd/s/745fb5fd27384cc50c01b4575c1beee4ebb8a31cd281211d52960881d4529df4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:23.631528 
systemd-networkd[1405]: calie1e42a174f2: Link UP Nov 6 00:23:23.634716 systemd-networkd[1405]: calie1e42a174f2: Gained carrier Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.282 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0 coredns-674b8bbfcf- kube-system b055a3cb-6725-4c38-a9df-541d3ef5e7bb 818 0 2025-11-06 00:22:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e coredns-674b8bbfcf-v226n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1e42a174f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.283 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.389 [INFO][4579] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" HandleID="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.390 [INFO][4579] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" HandleID="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"coredns-674b8bbfcf-v226n", "timestamp":"2025-11-06 00:23:23.389067743 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.390 [INFO][4579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.436 [INFO][4579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.436 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.492 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.536 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.545 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.548 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.552 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.552 [INFO][4579] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.555 [INFO][4579] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.567 [INFO][4579] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.598 [INFO][4579] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.7/26] block=192.168.16.0/26 handle="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.598 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.7/26] handle="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.598 [INFO][4579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:23.664619 containerd[1518]: 2025-11-06 00:23:23.598 [INFO][4579] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.7/26] IPv6=[] ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" HandleID="k8s-pod-network.b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.606 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b055a3cb-6725-4c38-a9df-541d3ef5e7bb", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"coredns-674b8bbfcf-v226n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1e42a174f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.606 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.7/32] ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.606 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1e42a174f2 ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.636 
[INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.639 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b055a3cb-6725-4c38-a9df-541d3ef5e7bb", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de", Pod:"coredns-674b8bbfcf-v226n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1e42a174f2", MAC:"a6:62:72:67:35:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:23.671017 containerd[1518]: 2025-11-06 00:23:23.660 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" Namespace="kube-system" Pod="coredns-674b8bbfcf-v226n" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-coredns--674b8bbfcf--v226n-eth0" Nov 6 00:23:23.697022 systemd[1]: Started cri-containerd-4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b.scope - libcontainer container 4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b. 
Nov 6 00:23:23.734674 containerd[1518]: time="2025-11-06T00:23:23.734021791Z" level=info msg="connecting to shim b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de" address="unix:///run/containerd/s/69a7fc09306dbef1a5b25592db2fa067f350cf58a95f9071a4ecaee151fc7061" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:23.795054 systemd[1]: Started cri-containerd-b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de.scope - libcontainer container b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de. Nov 6 00:23:23.837918 systemd-networkd[1405]: cali4c83f3b4db4: Gained IPv6LL Nov 6 00:23:23.916278 containerd[1518]: time="2025-11-06T00:23:23.916216455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v226n,Uid:b055a3cb-6725-4c38-a9df-541d3ef5e7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de\"" Nov 6 00:23:23.931146 containerd[1518]: time="2025-11-06T00:23:23.930220679Z" level=info msg="CreateContainer within sandbox \"b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:23:23.949817 containerd[1518]: time="2025-11-06T00:23:23.949129516Z" level=info msg="Container 817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:23.951412 containerd[1518]: time="2025-11-06T00:23:23.951344429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65l6k,Uid:7fe53f63-7b33-45ac-b5f4-f8e84eb05683,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a7e2702c49cac621349bb6701f6110b41a68629a1a1ca1c05024681a18cca3b\"" Nov 6 00:23:23.954725 containerd[1518]: time="2025-11-06T00:23:23.953940692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:23:23.959937 containerd[1518]: time="2025-11-06T00:23:23.959895462Z" level=info msg="CreateContainer within sandbox \"b303aafce462a388c238e122fcec1370376e37bf9e64f207365a827584efa9de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0\"" Nov 6 00:23:23.960806 containerd[1518]: time="2025-11-06T00:23:23.960683019Z" level=info msg="StartContainer for \"817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0\"" Nov 6 00:23:23.964128 containerd[1518]: time="2025-11-06T00:23:23.964003410Z" level=info msg="connecting to shim 817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0" address="unix:///run/containerd/s/69a7fc09306dbef1a5b25592db2fa067f350cf58a95f9071a4ecaee151fc7061" protocol=ttrpc version=3 Nov 6 00:23:23.994000 systemd[1]: Started cri-containerd-817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0.scope - libcontainer container 817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0. 
Nov 6 00:23:24.056461 containerd[1518]: time="2025-11-06T00:23:24.056410356Z" level=info msg="StartContainer for \"817ae15f981ed9779ae7e5870ed209df9e4ec9225ebf3bf9feb4b5625df4a6c0\" returns successfully" Nov 6 00:23:24.115730 containerd[1518]: time="2025-11-06T00:23:24.115614121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:24.117490 containerd[1518]: time="2025-11-06T00:23:24.117394476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:23:24.117660 containerd[1518]: time="2025-11-06T00:23:24.117553505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:24.117895 kubelet[2773]: E1106 00:23:24.117831 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:24.118382 kubelet[2773]: E1106 00:23:24.117896 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:24.118382 kubelet[2773]: E1106 00:23:24.118111 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65l6k_calico-system(7fe53f63-7b33-45ac-b5f4-f8e84eb05683): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:24.119995 kubelet[2773]: E1106 00:23:24.119943 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:24.126359 containerd[1518]: time="2025-11-06T00:23:24.126300638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-8j9rh,Uid:0495b935-824c-48f0-99f7-45ec9b94fbf9,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:24.339444 systemd-networkd[1405]: calie3027b6af69: Link UP Nov 6 00:23:24.340102 systemd-networkd[1405]: calie3027b6af69: Gained carrier Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.210 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0 calico-apiserver-74b877ccb8- calico-apiserver 0495b935-824c-48f0-99f7-45ec9b94fbf9 827 0 2025-11-06 00:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b877ccb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e calico-apiserver-74b877ccb8-8j9rh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3027b6af69 [] [] }} ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-" Nov 6 00:23:24.363714 
containerd[1518]: 2025-11-06 00:23:24.210 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.267 [INFO][4754] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" HandleID="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.267 [INFO][4754] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" HandleID="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fde0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", "pod":"calico-apiserver-74b877ccb8-8j9rh", "timestamp":"2025-11-06 00:23:24.267631136 +0000 UTC"}, Hostname:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.268 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.268 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.268 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e' Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.278 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.285 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.297 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.301 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.304 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.0/26 host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.304 [INFO][4754] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.0/26 handle="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.308 [INFO][4754] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.315 [INFO][4754] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.0/26 handle="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.327 [INFO][4754] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.8/26] block=192.168.16.0/26 handle="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.327 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.8/26] handle="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" host="ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e" Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.327 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:24.363714 containerd[1518]: 2025-11-06 00:23:24.327 [INFO][4754] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.8/26] IPv6=[] ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" HandleID="k8s-pod-network.d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Workload="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.330 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0", GenerateName:"calico-apiserver-74b877ccb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0495b935-824c-48f0-99f7-45ec9b94fbf9", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b877ccb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"", Pod:"calico-apiserver-74b877ccb8-8j9rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3027b6af69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.331 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.8/32] ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.332 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3027b6af69 ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.338 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" 
Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.343 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0", GenerateName:"calico-apiserver-74b877ccb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0495b935-824c-48f0-99f7-45ec9b94fbf9", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b877ccb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-nightly-20251105-2100-9edbbb99009956a7ed1e", ContainerID:"d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a", Pod:"calico-apiserver-74b877ccb8-8j9rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3027b6af69", MAC:"a6:c8:de:ca:9f:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:24.366325 containerd[1518]: 2025-11-06 00:23:24.359 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" Namespace="calico-apiserver" Pod="calico-apiserver-74b877ccb8-8j9rh" WorkloadEndpoint="ci--4459--1--0--nightly--20251105--2100--9edbbb99009956a7ed1e-k8s-calico--apiserver--74b877ccb8--8j9rh-eth0" Nov 6 00:23:24.417093 containerd[1518]: time="2025-11-06T00:23:24.416836666Z" level=info msg="connecting to shim d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a" address="unix:///run/containerd/s/72225f17c6d7b26e2a46c3150c0e471b432794313599507b7244d672732198dd" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:24.449837 kubelet[2773]: E1106 00:23:24.448979 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:24.449837 kubelet[2773]: E1106 00:23:24.449205 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:24.521164 systemd[1]: Started cri-containerd-d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a.scope - libcontainer container d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a. Nov 6 00:23:24.535217 kubelet[2773]: I1106 00:23:24.535125 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v226n" podStartSLOduration=45.534896524 podStartE2EDuration="45.534896524s" podCreationTimestamp="2025-11-06 00:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:24.479540621 +0000 UTC m=+50.595719545" watchObservedRunningTime="2025-11-06 00:23:24.534896524 +0000 UTC m=+50.651075450" Nov 6 00:23:24.543296 systemd-networkd[1405]: cali2cbbaeda1b5: Gained IPv6LL Nov 6 00:23:24.722702 containerd[1518]: time="2025-11-06T00:23:24.722628758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b877ccb8-8j9rh,Uid:0495b935-824c-48f0-99f7-45ec9b94fbf9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d616679d63795c499f8db4a421556eb81d0d7d294f42a363f3d14c234c2e4a8a\"" Nov 6 00:23:24.725655 containerd[1518]: time="2025-11-06T00:23:24.725577395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:24.798273 systemd-networkd[1405]: calie1e42a174f2: Gained IPv6LL Nov 6 00:23:24.890015 containerd[1518]: time="2025-11-06T00:23:24.889945996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:24.891478 containerd[1518]: time="2025-11-06T00:23:24.891419554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:24.891726 containerd[1518]: time="2025-11-06T00:23:24.891544360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:24.892034 kubelet[2773]: E1106 00:23:24.891966 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:24.892034 kubelet[2773]: E1106 00:23:24.892029 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:24.892582 kubelet[2773]: E1106 00:23:24.892238 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mrx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-8j9rh_calico-apiserver(0495b935-824c-48f0-99f7-45ec9b94fbf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:24.893808 kubelet[2773]: E1106 00:23:24.893518 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:25.182191 systemd-networkd[1405]: cali385de09ca18: Gained IPv6LL Nov 6 00:23:25.444243 kubelet[2773]: E1106 00:23:25.443899 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:25.445924 kubelet[2773]: E1106 00:23:25.445046 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:26.334966 systemd-networkd[1405]: calie3027b6af69: Gained IPv6LL Nov 6 00:23:26.447367 kubelet[2773]: E1106 00:23:26.447316 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:28.613222 ntpd[1653]: Listen normally on 6 vxlan.calico 192.168.16.0:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 6 vxlan.calico 192.168.16.0:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 7 cali8d82c68210b [fe80::ecee:eeff:feee:eeee%4]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 8 vxlan.calico [fe80::6457:b1ff:fe5b:7209%5]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 9 cali4e33eba6811 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 10 caliae6c17a408a [fe80::ecee:eeff:feee:eeee%9]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 11 cali4c83f3b4db4 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 12 cali2cbbaeda1b5 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 13 cali385de09ca18 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 14 calie1e42a174f2 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 6 00:23:28.613950 ntpd[1653]: 6 Nov 00:23:28 ntpd[1653]: Listen normally on 15 calie3027b6af69 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 6 
00:23:28.613310 ntpd[1653]: Listen normally on 7 cali8d82c68210b [fe80::ecee:eeff:feee:eeee%4]:123 Nov 6 00:23:28.613353 ntpd[1653]: Listen normally on 8 vxlan.calico [fe80::6457:b1ff:fe5b:7209%5]:123 Nov 6 00:23:28.613395 ntpd[1653]: Listen normally on 9 cali4e33eba6811 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 6 00:23:28.613452 ntpd[1653]: Listen normally on 10 caliae6c17a408a [fe80::ecee:eeff:feee:eeee%9]:123 Nov 6 00:23:28.613493 ntpd[1653]: Listen normally on 11 cali4c83f3b4db4 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 6 00:23:28.613534 ntpd[1653]: Listen normally on 12 cali2cbbaeda1b5 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 6 00:23:28.613573 ntpd[1653]: Listen normally on 13 cali385de09ca18 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 6 00:23:28.613809 ntpd[1653]: Listen normally on 14 calie1e42a174f2 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 6 00:23:28.613857 ntpd[1653]: Listen normally on 15 calie3027b6af69 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 6 00:23:31.126457 containerd[1518]: time="2025-11-06T00:23:31.126392180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:23:31.285169 containerd[1518]: time="2025-11-06T00:23:31.285063460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:31.286670 containerd[1518]: time="2025-11-06T00:23:31.286616826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:23:31.286670 containerd[1518]: time="2025-11-06T00:23:31.286620591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:23:31.286968 kubelet[2773]: E1106 00:23:31.286919 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:31.287457 kubelet[2773]: E1106 00:23:31.286981 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:31.287457 kubelet[2773]: E1106 00:23:31.287174 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d89eb6d345984e0dbfc52267d62dcbe4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:31.290203 containerd[1518]: time="2025-11-06T00:23:31.290151708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:23:31.466522 containerd[1518]: time="2025-11-06T00:23:31.466349058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:31.468039 containerd[1518]: time="2025-11-06T00:23:31.467980822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:23:31.468326 containerd[1518]: time="2025-11-06T00:23:31.468009604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:31.468599 kubelet[2773]: E1106 00:23:31.468231 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:31.468599 kubelet[2773]: E1106 00:23:31.468281 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:31.468599 kubelet[2773]: E1106 00:23:31.468478 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:31.470287 kubelet[2773]: E1106 00:23:31.470194 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:23:34.126216 containerd[1518]: time="2025-11-06T00:23:34.125795998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:23:34.288588 containerd[1518]: time="2025-11-06T00:23:34.288522786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 
00:23:34.290136 containerd[1518]: time="2025-11-06T00:23:34.290065540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:23:34.290296 containerd[1518]: time="2025-11-06T00:23:34.290063529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:34.290452 kubelet[2773]: E1106 00:23:34.290374 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:34.291026 kubelet[2773]: E1106 00:23:34.290470 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:34.291490 kubelet[2773]: E1106 00:23:34.291063 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwrtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5486c85ff6-xm98d_calico-system(5d385245-7d9d-431f-b9ed-020a695bf7cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:34.292860 kubelet[2773]: E1106 00:23:34.292710 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:37.125609 containerd[1518]: time="2025-11-06T00:23:37.125254662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:37.288742 containerd[1518]: time="2025-11-06T00:23:37.288665076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:37.290377 containerd[1518]: time="2025-11-06T00:23:37.290312729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:37.290377 containerd[1518]: time="2025-11-06T00:23:37.290328678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:37.290719 kubelet[2773]: E1106 00:23:37.290647 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:37.291956 kubelet[2773]: E1106 00:23:37.290734 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:37.291956 kubelet[2773]: E1106 
00:23:37.291073 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mhl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-v7bgg_calico-apiserver(d2b7456c-b14f-487f-b6a5-068ef90c8b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:37.292656 containerd[1518]: time="2025-11-06T00:23:37.291306766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:23:37.293022 kubelet[2773]: E1106 00:23:37.292636 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:37.454360 containerd[1518]: time="2025-11-06T00:23:37.454184982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:37.455701 containerd[1518]: time="2025-11-06T00:23:37.455571761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:23:37.455701 containerd[1518]: time="2025-11-06T00:23:37.455621766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:37.456029 kubelet[2773]: E1106 00:23:37.455974 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:37.456176 kubelet[2773]: E1106 00:23:37.456060 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:37.456718 kubelet[2773]: E1106 00:23:37.456352 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65l6k_calico-system(7fe53f63-7b33-45ac-b5f4-f8e84eb05683): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:37.458185 kubelet[2773]: E1106 00:23:37.458128 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:39.125883 containerd[1518]: time="2025-11-06T00:23:39.125813403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:23:39.283613 containerd[1518]: time="2025-11-06T00:23:39.283555172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:39.285093 containerd[1518]: time="2025-11-06T00:23:39.285039196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:23:39.285273 containerd[1518]: time="2025-11-06T00:23:39.285044446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:23:39.285433 kubelet[2773]: E1106 00:23:39.285315 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:39.285433 kubelet[2773]: E1106 00:23:39.285379 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:39.286424 kubelet[2773]: E1106 00:23:39.286322 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:39.289683 containerd[1518]: time="2025-11-06T00:23:39.289650451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:23:39.453358 containerd[1518]: time="2025-11-06T00:23:39.453191422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:39.454651 containerd[1518]: time="2025-11-06T00:23:39.454593346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:23:39.454839 containerd[1518]: time="2025-11-06T00:23:39.454623490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:23:39.455043 kubelet[2773]: E1106 00:23:39.454953 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:23:39.455149 kubelet[2773]: E1106 00:23:39.455059 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:23:39.455341 kubelet[2773]: E1106 00:23:39.455246 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:39.456902 kubelet[2773]: E1106 00:23:39.456852 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:41.126246 containerd[1518]: time="2025-11-06T00:23:41.124978970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:41.287493 containerd[1518]: time="2025-11-06T00:23:41.287419528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:41.289127 containerd[1518]: time="2025-11-06T00:23:41.289032755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:41.289359 containerd[1518]: time="2025-11-06T00:23:41.289042976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:41.289446 kubelet[2773]: E1106 00:23:41.289334 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:41.289446 kubelet[2773]: E1106 00:23:41.289391 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:41.290097 kubelet[2773]: E1106 00:23:41.289573 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mrx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-8j9rh_calico-apiserver(0495b935-824c-48f0-99f7-45ec9b94fbf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:41.291282 kubelet[2773]: E1106 00:23:41.291216 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:45.127014 kubelet[2773]: E1106 00:23:45.126946 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:23:48.125465 kubelet[2773]: E1106 00:23:48.125386 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:23:48.457249 containerd[1518]: time="2025-11-06T00:23:48.457076719Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\" id:\"c782a20bb77baccf11797b9f43abea60e21ee27dadfdc5ad69617b638e7f3c62\" pid:4871 exited_at:{seconds:1762388628 nanos:456167690}" Nov 6 00:23:50.127117 kubelet[2773]: E1106 00:23:50.125865 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:23:52.127679 kubelet[2773]: E1106 00:23:52.127620 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:23:53.127992 kubelet[2773]: E1106 00:23:53.127906 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:23:53.394282 systemd[1]: Started sshd@7-10.128.0.9:22-147.75.109.163:53540.service - OpenSSH per-connection server daemon (147.75.109.163:53540). Nov 6 00:23:53.732642 sshd[4886]: Accepted publickey for core from 147.75.109.163 port 53540 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:23:53.736861 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:53.748828 systemd-logind[1500]: New session 8 of user core. Nov 6 00:23:53.755054 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:23:54.113912 sshd[4889]: Connection closed by 147.75.109.163 port 53540 Nov 6 00:23:54.116331 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:54.124146 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. 
Nov 6 00:23:54.125544 systemd[1]: sshd@7-10.128.0.9:22-147.75.109.163:53540.service: Deactivated successfully. Nov 6 00:23:54.130459 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:23:54.137536 systemd-logind[1500]: Removed session 8. Nov 6 00:23:56.130955 kubelet[2773]: E1106 00:23:56.130888 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:23:58.136462 containerd[1518]: time="2025-11-06T00:23:58.136292520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:23:58.315973 containerd[1518]: time="2025-11-06T00:23:58.315918190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:58.317471 containerd[1518]: time="2025-11-06T00:23:58.317410159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:23:58.317648 containerd[1518]: time="2025-11-06T00:23:58.317533517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:23:58.317798 kubelet[2773]: E1106 00:23:58.317740 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:58.319396 kubelet[2773]: E1106 00:23:58.317826 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:58.319396 kubelet[2773]: E1106 00:23:58.318017 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d89eb6d345984e0dbfc52267d62dcbe4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:58.322655 containerd[1518]: time="2025-11-06T00:23:58.322614587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:23:58.487878 containerd[1518]: time="2025-11-06T00:23:58.487713369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:58.489485 containerd[1518]: time="2025-11-06T00:23:58.489385569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:23:58.489646 containerd[1518]: time="2025-11-06T00:23:58.489444093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:58.489921 kubelet[2773]: E1106 00:23:58.489863 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:58.490634 kubelet[2773]: E1106 00:23:58.489938 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:58.490634 kubelet[2773]: E1106 00:23:58.490124 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdgxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f67c8cdb4-zm6bl_calico-system(927ec230-fe67-4f72-91a2-11014246002e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:58.492151 kubelet[2773]: E1106 00:23:58.492002 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:23:59.170003 systemd[1]: Started sshd@8-10.128.0.9:22-147.75.109.163:53556.service - OpenSSH per-connection server daemon (147.75.109.163:53556). 
Nov 6 00:23:59.483577 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 53556 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:23:59.485845 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:59.492839 systemd-logind[1500]: New session 9 of user core. Nov 6 00:23:59.497961 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:23:59.832801 sshd[4911]: Connection closed by 147.75.109.163 port 53556 Nov 6 00:23:59.833846 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:59.841658 systemd[1]: sshd@8-10.128.0.9:22-147.75.109.163:53556.service: Deactivated successfully. Nov 6 00:23:59.847909 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:23:59.850033 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:23:59.852692 systemd-logind[1500]: Removed session 9. Nov 6 00:24:01.127788 containerd[1518]: time="2025-11-06T00:24:01.127546977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:24:01.283275 containerd[1518]: time="2025-11-06T00:24:01.283215840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:01.284904 containerd[1518]: time="2025-11-06T00:24:01.284844386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:24:01.285087 containerd[1518]: time="2025-11-06T00:24:01.285050288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:01.285496 kubelet[2773]: E1106 00:24:01.285425 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:01.286507 kubelet[2773]: E1106 00:24:01.285494 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:01.286507 kubelet[2773]: E1106 00:24:01.285698 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwrtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5486c85ff6-xm98d_calico-system(5d385245-7d9d-431f-b9ed-020a695bf7cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:01.287018 kubelet[2773]: E1106 00:24:01.286977 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:24:02.127470 containerd[1518]: time="2025-11-06T00:24:02.127401738Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:02.302003 containerd[1518]: time="2025-11-06T00:24:02.301939065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:02.303580 containerd[1518]: time="2025-11-06T00:24:02.303505195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:02.304003 containerd[1518]: time="2025-11-06T00:24:02.303530453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:02.304145 kubelet[2773]: E1106 00:24:02.304078 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:02.304145 kubelet[2773]: E1106 00:24:02.304139 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:02.305521 kubelet[2773]: E1106 00:24:02.304324 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mhl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-v7bgg_calico-apiserver(d2b7456c-b14f-487f-b6a5-068ef90c8b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:02.306120 kubelet[2773]: E1106 00:24:02.306077 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:24:03.128056 containerd[1518]: time="2025-11-06T00:24:03.127983729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:24:03.292629 containerd[1518]: time="2025-11-06T00:24:03.292503082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:03.294524 containerd[1518]: time="2025-11-06T00:24:03.294461416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:24:03.294664 containerd[1518]: time="2025-11-06T00:24:03.294582897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:03.294932 kubelet[2773]: E1106 00:24:03.294845 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:03.294932 kubelet[2773]: E1106 00:24:03.294918 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:03.295245 kubelet[2773]: E1106 00:24:03.295138 2773 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65l6k_calico-system(7fe53f63-7b33-45ac-b5f4-f8e84eb05683): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:03.296782 kubelet[2773]: E1106 00:24:03.296432 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" 
podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:24:04.892060 systemd[1]: Started sshd@9-10.128.0.9:22-147.75.109.163:52192.service - OpenSSH per-connection server daemon (147.75.109.163:52192). Nov 6 00:24:05.227939 sshd[4932]: Accepted publickey for core from 147.75.109.163 port 52192 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:05.230214 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:05.238640 systemd-logind[1500]: New session 10 of user core. Nov 6 00:24:05.248153 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:24:05.641634 sshd[4935]: Connection closed by 147.75.109.163 port 52192 Nov 6 00:24:05.642170 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:05.652432 systemd[1]: sshd@9-10.128.0.9:22-147.75.109.163:52192.service: Deactivated successfully. Nov 6 00:24:05.657669 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:24:05.659679 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:24:05.665161 systemd-logind[1500]: Removed session 10. Nov 6 00:24:05.698932 systemd[1]: Started sshd@10-10.128.0.9:22-147.75.109.163:52206.service - OpenSSH per-connection server daemon (147.75.109.163:52206). Nov 6 00:24:06.028538 sshd[4948]: Accepted publickey for core from 147.75.109.163 port 52206 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:06.031941 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:06.042894 systemd-logind[1500]: New session 11 of user core. Nov 6 00:24:06.046010 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:24:06.455927 sshd[4951]: Connection closed by 147.75.109.163 port 52206 Nov 6 00:24:06.455599 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:06.468350 systemd[1]: sshd@10-10.128.0.9:22-147.75.109.163:52206.service: Deactivated successfully. Nov 6 00:24:06.477350 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:24:06.494095 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:24:06.516994 systemd[1]: Started sshd@11-10.128.0.9:22-147.75.109.163:52222.service - OpenSSH per-connection server daemon (147.75.109.163:52222). Nov 6 00:24:06.519196 systemd-logind[1500]: Removed session 11. Nov 6 00:24:06.860170 sshd[4961]: Accepted publickey for core from 147.75.109.163 port 52222 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:06.862328 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:06.870297 systemd-logind[1500]: New session 12 of user core. Nov 6 00:24:06.884090 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:24:07.127654 containerd[1518]: time="2025-11-06T00:24:07.127169747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:24:07.206452 sshd[4964]: Connection closed by 147.75.109.163 port 52222 Nov 6 00:24:07.205139 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:07.214460 systemd[1]: sshd@11-10.128.0.9:22-147.75.109.163:52222.service: Deactivated successfully. Nov 6 00:24:07.221717 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:24:07.225777 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:24:07.228126 systemd-logind[1500]: Removed session 12. 
Nov 6 00:24:07.298303 containerd[1518]: time="2025-11-06T00:24:07.298242178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:07.300163 containerd[1518]: time="2025-11-06T00:24:07.300012320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:24:07.300163 containerd[1518]: time="2025-11-06T00:24:07.300121684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:24:07.301252 kubelet[2773]: E1106 00:24:07.300519 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:07.301252 kubelet[2773]: E1106 00:24:07.300585 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:07.301252 kubelet[2773]: E1106 00:24:07.300745 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:07.307109 containerd[1518]: time="2025-11-06T00:24:07.307050782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:24:07.516296 containerd[1518]: time="2025-11-06T00:24:07.516123949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:07.517820 containerd[1518]: time="2025-11-06T00:24:07.517729729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:24:07.517965 containerd[1518]: time="2025-11-06T00:24:07.517866317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:24:07.519160 kubelet[2773]: E1106 00:24:07.518095 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:07.519160 kubelet[2773]: E1106 00:24:07.518217 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:07.519160 kubelet[2773]: E1106 00:24:07.518646 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgmkz_calico-system(70152e9b-de49-41f1-96dc-b8cd479787b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:07.520054 kubelet[2773]: E1106 00:24:07.519996 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:24:10.128709 containerd[1518]: time="2025-11-06T00:24:10.128648680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:10.291460 containerd[1518]: time="2025-11-06T00:24:10.291398849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:10.292920 containerd[1518]: time="2025-11-06T00:24:10.292861253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:10.293067 containerd[1518]: time="2025-11-06T00:24:10.292994474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:10.293362 kubelet[2773]: E1106 00:24:10.293289 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:10.293892 kubelet[2773]: E1106 00:24:10.293377 2773 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:10.294782 kubelet[2773]: E1106 00:24:10.294687 2773 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mrx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74b877ccb8-8j9rh_calico-apiserver(0495b935-824c-48f0-99f7-45ec9b94fbf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:10.295966 kubelet[2773]: E1106 00:24:10.295900 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:24:12.131455 kubelet[2773]: E1106 00:24:12.129940 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:24:12.132586 kubelet[2773]: E1106 00:24:12.130145 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:24:12.265250 systemd[1]: Started sshd@12-10.128.0.9:22-147.75.109.163:50416.service - OpenSSH per-connection server daemon (147.75.109.163:50416). Nov 6 00:24:12.605154 sshd[4978]: Accepted publickey for core from 147.75.109.163 port 50416 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:12.608722 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:12.617640 systemd-logind[1500]: New session 13 of user core. Nov 6 00:24:12.623978 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:24:12.977879 sshd[4985]: Connection closed by 147.75.109.163 port 50416 Nov 6 00:24:12.979084 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:12.989825 systemd[1]: sshd@12-10.128.0.9:22-147.75.109.163:50416.service: Deactivated successfully. Nov 6 00:24:12.996512 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:24:13.001786 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:24:13.007840 systemd-logind[1500]: Removed session 13. 
Nov 6 00:24:15.125943 kubelet[2773]: E1106 00:24:15.125334 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:24:15.125943 kubelet[2773]: E1106 00:24:15.125509 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:24:18.033410 systemd[1]: Started sshd@13-10.128.0.9:22-147.75.109.163:50420.service - OpenSSH per-connection server daemon (147.75.109.163:50420). Nov 6 00:24:18.373985 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 50420 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:18.379836 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:18.396146 systemd-logind[1500]: New session 14 of user core. Nov 6 00:24:18.401065 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:24:18.571630 containerd[1518]: time="2025-11-06T00:24:18.570611392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4557950409099f4d0c4b781f59bac7c05a4f2b797d63060af615e20afcb861d\" id:\"40cf21f90897696e0adbd4c397b14bffb4116155b594b1c987f00225a1b05493\" pid:5011 exited_at:{seconds:1762388658 nanos:564930244}" Nov 6 00:24:18.732801 sshd[5016]: Connection closed by 147.75.109.163 port 50420 Nov 6 00:24:18.734572 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:18.744708 systemd[1]: sshd@13-10.128.0.9:22-147.75.109.163:50420.service: Deactivated successfully. Nov 6 00:24:18.749566 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:24:18.752797 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:24:18.756285 systemd-logind[1500]: Removed session 14. 
Nov 6 00:24:21.128485 kubelet[2773]: E1106 00:24:21.128423 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:24:23.135347 kubelet[2773]: E1106 00:24:23.135091 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:24:23.790039 systemd[1]: Started sshd@14-10.128.0.9:22-147.75.109.163:46770.service - OpenSSH per-connection server daemon (147.75.109.163:46770). Nov 6 00:24:24.129293 sshd[5036]: Accepted publickey for core from 147.75.109.163 port 46770 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:24.133395 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.145054 systemd-logind[1500]: New session 15 of user core. Nov 6 00:24:24.153926 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:24:24.490804 sshd[5039]: Connection closed by 147.75.109.163 port 46770 Nov 6 00:24:24.492062 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:24.503746 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:24:24.505387 systemd[1]: sshd@14-10.128.0.9:22-147.75.109.163:46770.service: Deactivated successfully. Nov 6 00:24:24.510214 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:24:24.515818 systemd-logind[1500]: Removed session 15. Nov 6 00:24:24.548879 systemd[1]: Started sshd@15-10.128.0.9:22-147.75.109.163:46778.service - OpenSSH per-connection server daemon (147.75.109.163:46778). 
Nov 6 00:24:24.882897 sshd[5052]: Accepted publickey for core from 147.75.109.163 port 46778 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:24.885052 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:24.898684 systemd-logind[1500]: New session 16 of user core. Nov 6 00:24:24.902434 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:24:25.344007 sshd[5055]: Connection closed by 147.75.109.163 port 46778 Nov 6 00:24:25.346499 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:25.353992 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:24:25.354966 systemd[1]: sshd@15-10.128.0.9:22-147.75.109.163:46778.service: Deactivated successfully. Nov 6 00:24:25.359145 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:24:25.364674 systemd-logind[1500]: Removed session 16. Nov 6 00:24:25.404028 systemd[1]: Started sshd@16-10.128.0.9:22-147.75.109.163:46794.service - OpenSSH per-connection server daemon (147.75.109.163:46794). Nov 6 00:24:25.729415 sshd[5065]: Accepted publickey for core from 147.75.109.163 port 46794 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:25.731596 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:25.741302 systemd-logind[1500]: New session 17 of user core. Nov 6 00:24:25.746188 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:24:26.127408 kubelet[2773]: E1106 00:24:26.127350 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:24:26.128642 kubelet[2773]: E1106 00:24:26.128185 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:24:26.129480 kubelet[2773]: E1106 00:24:26.129324 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd"
Nov 6 00:24:26.934894 sshd[5068]: Connection closed by 147.75.109.163 port 46794 Nov 6 00:24:26.935987 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:26.949003 systemd[1]: sshd@16-10.128.0.9:22-147.75.109.163:46794.service: Deactivated successfully. Nov 6 00:24:26.955343 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:24:26.963499 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:24:26.965771 systemd-logind[1500]: Removed session 17. Nov 6 00:24:26.993237 systemd[1]: Started sshd@17-10.128.0.9:22-147.75.109.163:46800.service - OpenSSH per-connection server daemon (147.75.109.163:46800). Nov 6 00:24:27.126837 kubelet[2773]: E1106 00:24:27.126692 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:24:27.342255 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 46800 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:27.345497 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:27.355351 systemd-logind[1500]: New session 18 of user core. Nov 6 00:24:27.364202 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:24:27.958508 sshd[5089]: Connection closed by 147.75.109.163 port 46800 Nov 6 00:24:27.959097 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:27.968701 systemd[1]: sshd@17-10.128.0.9:22-147.75.109.163:46800.service: Deactivated successfully. Nov 6 00:24:27.974912 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:24:27.977358 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:24:27.980195 systemd-logind[1500]: Removed session 18. Nov 6 00:24:28.018881 systemd[1]: Started sshd@18-10.128.0.9:22-147.75.109.163:46806.service - OpenSSH per-connection server daemon (147.75.109.163:46806). Nov 6 00:24:28.351240 sshd[5099]: Accepted publickey for core from 147.75.109.163 port 46806 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:28.355151 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:28.365037 systemd-logind[1500]: New session 19 of user core. Nov 6 00:24:28.371946 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:24:28.661901 sshd[5102]: Connection closed by 147.75.109.163 port 46806 Nov 6 00:24:28.662726 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:28.668884 systemd[1]: sshd@18-10.128.0.9:22-147.75.109.163:46806.service: Deactivated successfully. Nov 6 00:24:28.672055 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:24:28.673514 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:24:28.676182 systemd-logind[1500]: Removed session 19.
Nov 6 00:24:32.129462 kubelet[2773]: E1106 00:24:32.129290 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:24:33.724598 systemd[1]: Started sshd@19-10.128.0.9:22-147.75.109.163:52572.service - OpenSSH per-connection server daemon (147.75.109.163:52572). Nov 6 00:24:34.070874 sshd[5115]: Accepted publickey for core from 147.75.109.163 port 52572 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:34.073029 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:34.086086 systemd-logind[1500]: New session 20 of user core. Nov 6 00:24:34.093513 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:24:34.458291 sshd[5118]: Connection closed by 147.75.109.163 port 52572 Nov 6 00:24:34.460057 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:34.467531 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:24:34.469801 systemd[1]: sshd@19-10.128.0.9:22-147.75.109.163:52572.service: Deactivated successfully. Nov 6 00:24:34.477749 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:24:34.484655 systemd-logind[1500]: Removed session 20. 
Nov 6 00:24:37.126711 kubelet[2773]: E1106 00:24:37.126080 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-8j9rh" podUID="0495b935-824c-48f0-99f7-45ec9b94fbf9" Nov 6 00:24:38.129025 kubelet[2773]: E1106 00:24:38.128957 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f67c8cdb4-zm6bl" podUID="927ec230-fe67-4f72-91a2-11014246002e" Nov 6 00:24:39.126956 kubelet[2773]: E1106 00:24:39.126874 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74b877ccb8-v7bgg" podUID="d2b7456c-b14f-487f-b6a5-068ef90c8b4d" Nov 6 00:24:39.521292 systemd[1]: Started sshd@20-10.128.0.9:22-147.75.109.163:52584.service - OpenSSH per-connection server daemon (147.75.109.163:52584). Nov 6 00:24:39.867268 sshd[5135]: Accepted publickey for core from 147.75.109.163 port 52584 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:39.869955 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:39.881770 systemd-logind[1500]: New session 21 of user core. Nov 6 00:24:39.886561 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 6 00:24:40.128891 kubelet[2773]: E1106 00:24:40.128712 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65l6k" podUID="7fe53f63-7b33-45ac-b5f4-f8e84eb05683" Nov 6 00:24:40.204317 sshd[5144]: Connection closed by 147.75.109.163 port 52584 Nov 6 00:24:40.205979 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:40.219291 systemd[1]: sshd@20-10.128.0.9:22-147.75.109.163:52584.service: Deactivated successfully. Nov 6 00:24:40.225617 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:24:40.228673 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:24:40.233272 systemd-logind[1500]: Removed session 21. Nov 6 00:24:41.125472 kubelet[2773]: E1106 00:24:41.125416 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5486c85ff6-xm98d" podUID="5d385245-7d9d-431f-b9ed-020a695bf7cd" Nov 6 00:24:45.125806 kubelet[2773]: E1106 00:24:45.125724 2773 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgmkz" podUID="70152e9b-de49-41f1-96dc-b8cd479787b2" Nov 6 00:24:45.259987 systemd[1]: Started sshd@21-10.128.0.9:22-147.75.109.163:56680.service - OpenSSH per-connection server daemon (147.75.109.163:56680). Nov 6 00:24:45.589794 sshd[5160]: Accepted publickey for core from 147.75.109.163 port 56680 ssh2: RSA SHA256:1rgWRkq/AGoNC8pJ+EoO6/JehKPnyepWBQAZJa/eZsU Nov 6 00:24:45.591573 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:45.599879 systemd-logind[1500]: New session 22 of user core. Nov 6 00:24:45.606963 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 00:24:45.937998 sshd[5163]: Connection closed by 147.75.109.163 port 56680 Nov 6 00:24:45.942492 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:45.950374 systemd[1]: sshd@21-10.128.0.9:22-147.75.109.163:56680.service: Deactivated successfully. Nov 6 00:24:45.956244 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:24:45.959685 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:24:45.965289 systemd-logind[1500]: Removed session 22.