Oct 30 00:05:28.944440 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:07:32 -00 2025
Oct 30 00:05:28.944462 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:05:28.944486 kernel: BIOS-provided physical RAM map:
Oct 30 00:05:28.944493 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 30 00:05:28.944499 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Oct 30 00:05:28.944505 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Oct 30 00:05:28.944512 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Oct 30 00:05:28.944518 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Oct 30 00:05:28.944524 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Oct 30 00:05:28.944531 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Oct 30 00:05:28.944537 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Oct 30 00:05:28.944543 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Oct 30 00:05:28.944549 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Oct 30 00:05:28.944555 kernel: printk: legacy bootconsole [earlyser0] enabled
Oct 30 00:05:28.944563 kernel: NX (Execute Disable) protection: active
Oct 30 00:05:28.944571 kernel: APIC: Static calls initialized
Oct 30 00:05:28.944577 kernel: efi: EFI v2.7 by Microsoft
Oct 30 00:05:28.944583 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Oct 30 00:05:28.944590 kernel: random: crng init done
Oct 30 00:05:28.944596 kernel: secureboot: Secure boot disabled
Oct 30 00:05:28.944603 kernel: SMBIOS 3.1.0 present.
Oct 30 00:05:28.944609 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Oct 30 00:05:28.944616 kernel: DMI: Memory slots populated: 2/2
Oct 30 00:05:28.944622 kernel: Hypervisor detected: Microsoft Hyper-V
Oct 30 00:05:28.944629 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Oct 30 00:05:28.944635 kernel: Hyper-V: Nested features: 0x3e0101
Oct 30 00:05:28.944642 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Oct 30 00:05:28.944649 kernel: Hyper-V: Using hypercall for remote TLB flush
Oct 30 00:05:28.944655 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 30 00:05:28.944661 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 30 00:05:28.944668 kernel: tsc: Detected 2299.998 MHz processor
Oct 30 00:05:28.944674 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 00:05:28.944681 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 00:05:28.944688 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Oct 30 00:05:28.944695 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 30 00:05:28.944703 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 00:05:28.944710 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Oct 30 00:05:28.944717 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Oct 30 00:05:28.944724 kernel: Using GB pages for direct mapping
Oct 30 00:05:28.944731 kernel: ACPI: Early table checksum verification disabled
Oct 30 00:05:28.944740 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Oct 30 00:05:28.944748 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944756 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944763 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Oct 30 00:05:28.944769 kernel: ACPI: FACS 0x000000003FFFE000 000040
Oct 30 00:05:28.944777 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944784 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944791 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944798 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 30 00:05:28.944806 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Oct 30 00:05:28.944814 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 30 00:05:28.944821 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Oct 30 00:05:28.944829 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Oct 30 00:05:28.944836 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Oct 30 00:05:28.944843 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Oct 30 00:05:28.944849 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Oct 30 00:05:28.944856 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Oct 30 00:05:28.944863 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Oct 30 00:05:28.944871 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Oct 30 00:05:28.944877 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Oct 30 00:05:28.944884 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Oct 30 00:05:28.944891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Oct 30 00:05:28.944898 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Oct 30 00:05:28.944905 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Oct 30 00:05:28.944911 kernel: Zone ranges:
Oct 30 00:05:28.944918 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 00:05:28.944925 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 30 00:05:28.944933 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Oct 30 00:05:28.944940 kernel: Device empty
Oct 30 00:05:28.944947 kernel: Movable zone start for each node
Oct 30 00:05:28.944954 kernel: Early memory node ranges
Oct 30 00:05:28.944960 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 30 00:05:28.944967 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Oct 30 00:05:28.944974 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Oct 30 00:05:28.944980 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Oct 30 00:05:28.944987 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Oct 30 00:05:28.944995 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Oct 30 00:05:28.945002 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:05:28.945008 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 30 00:05:28.945015 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Oct 30 00:05:28.945021 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Oct 30 00:05:28.945028 kernel: ACPI: PM-Timer IO Port: 0x408
Oct 30 00:05:28.945035 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 00:05:28.945042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 00:05:28.945048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 00:05:28.945057 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Oct 30 00:05:28.945063 kernel: TSC deadline timer available
Oct 30 00:05:28.945070 kernel: CPU topo: Max. logical packages: 1
Oct 30 00:05:28.945077 kernel: CPU topo: Max. logical dies: 1
Oct 30 00:05:28.945083 kernel: CPU topo: Max. dies per package: 1
Oct 30 00:05:28.945090 kernel: CPU topo: Max. threads per core: 2
Oct 30 00:05:28.945096 kernel: CPU topo: Num. cores per package: 1
Oct 30 00:05:28.945103 kernel: CPU topo: Num. threads per package: 2
Oct 30 00:05:28.945110 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 30 00:05:28.945118 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Oct 30 00:05:28.945124 kernel: Booting paravirtualized kernel on Hyper-V
Oct 30 00:05:28.945131 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 00:05:28.945138 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 30 00:05:28.945145 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 30 00:05:28.945151 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 30 00:05:28.945158 kernel: pcpu-alloc: [0] 0 1
Oct 30 00:05:28.945164 kernel: Hyper-V: PV spinlocks enabled
Oct 30 00:05:28.945171 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 00:05:28.945182 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:05:28.945190 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 30 00:05:28.945197 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 00:05:28.945204 kernel: Fallback order for Node 0: 0
Oct 30 00:05:28.945209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Oct 30 00:05:28.945215 kernel: Policy zone: Normal
Oct 30 00:05:28.945220 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 00:05:28.945226 kernel: software IO TLB: area num 2.
Oct 30 00:05:28.945233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 30 00:05:28.945238 kernel: ftrace: allocating 40021 entries in 157 pages
Oct 30 00:05:28.945244 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 00:05:28.945250 kernel: Dynamic Preempt: voluntary
Oct 30 00:05:28.945256 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 00:05:28.945262 kernel: rcu: RCU event tracing is enabled.
Oct 30 00:05:28.945269 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 30 00:05:28.945281 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 00:05:28.945288 kernel: Rude variant of Tasks RCU enabled.
Oct 30 00:05:28.945295 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 00:05:28.945302 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 00:05:28.945308 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 30 00:05:28.945317 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:05:28.945324 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:05:28.945331 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:05:28.945338 kernel: Using NULL legacy PIC
Oct 30 00:05:28.945345 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Oct 30 00:05:28.945353 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 00:05:28.945359 kernel: Console: colour dummy device 80x25
Oct 30 00:05:28.945366 kernel: printk: legacy console [tty1] enabled
Oct 30 00:05:28.945373 kernel: printk: legacy console [ttyS0] enabled
Oct 30 00:05:28.945379 kernel: printk: legacy bootconsole [earlyser0] disabled
Oct 30 00:05:28.945386 kernel: ACPI: Core revision 20240827
Oct 30 00:05:28.945393 kernel: Failed to register legacy timer interrupt
Oct 30 00:05:28.945400 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 00:05:28.945407 kernel: x2apic enabled
Oct 30 00:05:28.945416 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 00:05:28.945423 kernel: Hyper-V: Host Build 10.0.26100.1381-1-0
Oct 30 00:05:28.945430 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 30 00:05:28.945437 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Oct 30 00:05:28.945444 kernel: Hyper-V: Using IPI hypercalls
Oct 30 00:05:28.945451 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Oct 30 00:05:28.945458 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Oct 30 00:05:28.945465 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Oct 30 00:05:28.945518 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Oct 30 00:05:28.945527 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Oct 30 00:05:28.945534 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Oct 30 00:05:28.945541 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 30 00:05:28.945548 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Oct 30 00:05:28.945555 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 30 00:05:28.945562 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 30 00:05:28.945570 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 30 00:05:28.945577 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 00:05:28.945583 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 00:05:28.945590 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 00:05:28.945599 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct 30 00:05:28.945606 kernel: RETBleed: Vulnerable
Oct 30 00:05:28.945613 kernel: Speculative Store Bypass: Vulnerable
Oct 30 00:05:28.945620 kernel: active return thunk: its_return_thunk
Oct 30 00:05:28.945627 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 30 00:05:28.945634 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 00:05:28.945641 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 00:05:28.945648 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 00:05:28.945656 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct 30 00:05:28.945663 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct 30 00:05:28.945671 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct 30 00:05:28.945677 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Oct 30 00:05:28.945684 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Oct 30 00:05:28.945690 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Oct 30 00:05:28.945697 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 00:05:28.945703 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Oct 30 00:05:28.945709 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Oct 30 00:05:28.945715 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Oct 30 00:05:28.945721 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Oct 30 00:05:28.945728 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Oct 30 00:05:28.945734 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Oct 30 00:05:28.945743 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Oct 30 00:05:28.945750 kernel: Freeing SMP alternatives memory: 32K
Oct 30 00:05:28.945757 kernel: pid_max: default: 32768 minimum: 301
Oct 30 00:05:28.945765 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 00:05:28.945772 kernel: landlock: Up and running.
Oct 30 00:05:28.945779 kernel: SELinux: Initializing.
Oct 30 00:05:28.945786 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 30 00:05:28.945793 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 30 00:05:28.945801 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Oct 30 00:05:28.945809 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Oct 30 00:05:28.945816 kernel: signal: max sigframe size: 11952
Oct 30 00:05:28.945826 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 00:05:28.945833 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 00:05:28.945841 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 00:05:28.945848 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 30 00:05:28.945856 kernel: smp: Bringing up secondary CPUs ...
Oct 30 00:05:28.945864 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 00:05:28.945871 kernel: .... node #0, CPUs: #1
Oct 30 00:05:28.945879 kernel: smp: Brought up 1 node, 2 CPUs
Oct 30 00:05:28.945886 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Oct 30 00:05:28.945895 kernel: Memory: 8070880K/8383228K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45544K init, 1184K bss, 306132K reserved, 0K cma-reserved)
Oct 30 00:05:28.945903 kernel: devtmpfs: initialized
Oct 30 00:05:28.945911 kernel: x86/mm: Memory block size: 128MB
Oct 30 00:05:28.945919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Oct 30 00:05:28.945927 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 00:05:28.945933 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 30 00:05:28.945941 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 00:05:28.946115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 00:05:28.946125 kernel: audit: initializing netlink subsys (disabled)
Oct 30 00:05:28.946134 kernel: audit: type=2000 audit(1761782725.028:1): state=initialized audit_enabled=0 res=1
Oct 30 00:05:28.946141 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 00:05:28.946149 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 00:05:28.946156 kernel: cpuidle: using governor menu
Oct 30 00:05:28.946164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 00:05:28.946172 kernel: dca service started, version 1.12.1
Oct 30 00:05:28.946179 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Oct 30 00:05:28.946186 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Oct 30 00:05:28.946193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 00:05:28.946202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 00:05:28.946210 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 00:05:28.946217 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 00:05:28.946225 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 00:05:28.946232 kernel: ACPI: Added _OSI(Module Device)
Oct 30 00:05:28.946239 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 00:05:28.946247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 00:05:28.946254 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 00:05:28.946261 kernel: ACPI: Interpreter enabled
Oct 30 00:05:28.946269 kernel: ACPI: PM: (supports S0 S5)
Oct 30 00:05:28.946277 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 00:05:28.946284 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 00:05:28.946291 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 30 00:05:28.946298 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 30 00:05:28.946305 kernel: iommu: Default domain type: Translated
Oct 30 00:05:28.946314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 00:05:28.946324 kernel: efivars: Registered efivars operations
Oct 30 00:05:28.946333 kernel: PCI: Using ACPI for IRQ routing
Oct 30 00:05:28.946344 kernel: PCI: System does not support PCI
Oct 30 00:05:28.946352 kernel: vgaarb: loaded
Oct 30 00:05:28.946361 kernel: clocksource: Switched to clocksource tsc-early
Oct 30 00:05:28.946370 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 00:05:28.946379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 00:05:28.946388 kernel: pnp: PnP ACPI init
Oct 30 00:05:28.946396 kernel: pnp: PnP ACPI: found 3 devices
Oct 30 00:05:28.946405 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 00:05:28.946414 kernel: NET: Registered PF_INET protocol family
Oct 30 00:05:28.946425 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 30 00:05:28.946433 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 30 00:05:28.946442 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 00:05:28.946451 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 00:05:28.946460 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 30 00:05:28.946486 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 30 00:05:28.946495 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 30 00:05:28.946504 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 30 00:05:28.946513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 00:05:28.946523 kernel: NET: Registered PF_XDP protocol family
Oct 30 00:05:28.946532 kernel: PCI: CLS 0 bytes, default 64
Oct 30 00:05:28.946541 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 30 00:05:28.946549 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Oct 30 00:05:28.946558 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Oct 30 00:05:28.946567 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Oct 30 00:05:28.946576 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 30 00:05:28.946585 kernel: clocksource: Switched to clocksource tsc
Oct 30 00:05:28.946593 kernel: Initialise system trusted keyrings
Oct 30 00:05:28.946604 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 30 00:05:28.946612 kernel: Key type asymmetric registered
Oct 30 00:05:28.946621 kernel: Asymmetric key parser 'x509' registered
Oct 30 00:05:28.946629 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 00:05:28.946638 kernel: io scheduler mq-deadline registered
Oct 30 00:05:28.946646 kernel: io scheduler kyber registered
Oct 30 00:05:28.946654 kernel: io scheduler bfq registered
Oct 30 00:05:28.946663 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 00:05:28.946672 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 00:05:28.946682 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:05:28.946691 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 30 00:05:28.946700 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:05:28.946708 kernel: i8042: PNP: No PS/2 controller found.
Oct 30 00:05:28.946825 kernel: rtc_cmos 00:02: registered as rtc0
Oct 30 00:05:28.946895 kernel: rtc_cmos 00:02: setting system clock to 2025-10-30T00:05:28 UTC (1761782728)
Oct 30 00:05:28.946959 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 30 00:05:28.946972 kernel: intel_pstate: Intel P-state driver initializing
Oct 30 00:05:28.946981 kernel: efifb: probing for efifb
Oct 30 00:05:28.946990 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 30 00:05:28.946999 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 30 00:05:28.947007 kernel: efifb: scrolling: redraw
Oct 30 00:05:28.947017 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 30 00:05:28.947026 kernel: Console: switching to colour frame buffer device 128x48
Oct 30 00:05:28.947033 kernel: fb0: EFI VGA frame buffer device
Oct 30 00:05:28.947041 kernel: pstore: Using crash dump compression: deflate
Oct 30 00:05:28.947051 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 30 00:05:28.947058 kernel: NET: Registered PF_INET6 protocol family
Oct 30 00:05:28.947066 kernel: Segment Routing with IPv6
Oct 30 00:05:28.947073 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 00:05:28.947081 kernel: NET: Registered PF_PACKET protocol family
Oct 30 00:05:28.947089 kernel: Key type dns_resolver registered
Oct 30 00:05:28.947096 kernel: IPI shorthand broadcast: enabled
Oct 30 00:05:28.947104 kernel: sched_clock: Marking stable (2567003913, 84036468)->(2936913972, -285873591)
Oct 30 00:05:28.947111 kernel: registered taskstats version 1
Oct 30 00:05:28.947119 kernel: Loading compiled-in X.509 certificates
Oct 30 00:05:28.947128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 815fc40077fbc06b8d9e8a6016fea83aecff0a2a'
Oct 30 00:05:28.947136 kernel: Demotion targets for Node 0: null
Oct 30 00:05:28.947144 kernel: Key type .fscrypt registered
Oct 30 00:05:28.947151 kernel: Key type fscrypt-provisioning registered
Oct 30 00:05:28.947159 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 00:05:28.947167 kernel: ima: Allocated hash algorithm: sha1
Oct 30 00:05:28.947174 kernel: ima: No architecture policies found
Oct 30 00:05:28.947182 kernel: clk: Disabling unused clocks
Oct 30 00:05:28.947189 kernel: Warning: unable to open an initial console.
Oct 30 00:05:28.947199 kernel: Freeing unused kernel image (initmem) memory: 45544K
Oct 30 00:05:28.947206 kernel: Write protecting the kernel read-only data: 40960k
Oct 30 00:05:28.947214 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Oct 30 00:05:28.947221 kernel: Run /init as init process
Oct 30 00:05:28.947229 kernel: with arguments:
Oct 30 00:05:28.947237 kernel: /init
Oct 30 00:05:28.947244 kernel: with environment:
Oct 30 00:05:28.947251 kernel: HOME=/
Oct 30 00:05:28.947259 kernel: TERM=linux
Oct 30 00:05:28.947269 systemd[1]: Successfully made /usr/ read-only.
Oct 30 00:05:28.947281 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:05:28.947291 systemd[1]: Detected virtualization microsoft.
Oct 30 00:05:28.947299 systemd[1]: Detected architecture x86-64.
Oct 30 00:05:28.947307 systemd[1]: Running in initrd.
Oct 30 00:05:28.947315 systemd[1]: No hostname configured, using default hostname.
Oct 30 00:05:28.947323 systemd[1]: Hostname set to .
Oct 30 00:05:28.947333 systemd[1]: Initializing machine ID from random generator.
Oct 30 00:05:28.947341 systemd[1]: Queued start job for default target initrd.target.
Oct 30 00:05:28.947349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:05:28.947357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:05:28.947366 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 00:05:28.947374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:05:28.947382 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 00:05:28.947394 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 00:05:28.947403 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 30 00:05:28.947411 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 30 00:05:28.947419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:05:28.947427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:05:28.947435 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:05:28.947444 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:05:28.947452 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:05:28.947461 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:05:28.947486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:05:28.947494 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:05:28.947503 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 00:05:28.947511 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 00:05:28.947520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:05:28.947528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:05:28.947536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:05:28.947544 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:05:28.947554 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 00:05:28.947562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:05:28.947570 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 00:05:28.947579 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 00:05:28.947587 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 00:05:28.947595 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:05:28.947603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:05:28.947612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:28.947630 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 00:05:28.947657 systemd-journald[187]: Collecting audit messages is disabled.
Oct 30 00:05:28.947680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:05:28.947689 systemd-journald[187]: Journal started
Oct 30 00:05:28.947732 systemd-journald[187]: Runtime Journal (/run/log/journal/7dd8a74809944a889a492b735029de1c) is 8M, max 158.6M, 150.6M free.
Oct 30 00:05:28.951634 systemd-modules-load[188]: Inserted module 'overlay'
Oct 30 00:05:28.956484 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:05:28.959749 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 00:05:28.965569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 00:05:28.970704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:05:28.984547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:28.993908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 00:05:28.993927 kernel: Bridge firewalling registered
Oct 30 00:05:28.988370 systemd-modules-load[188]: Inserted module 'br_netfilter'
Oct 30 00:05:28.989019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:05:28.994339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 00:05:29.001110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:05:29.003432 systemd-tmpfiles[199]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 00:05:29.011778 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:05:29.016671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:05:29.022185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:05:29.028604 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:05:29.033207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:05:29.037767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:05:29.042760 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 00:05:29.051654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:05:29.068066 systemd-resolved[221]: Positive Trust Anchors:
Oct 30 00:05:29.069720 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 00:05:29.072478 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 00:05:29.084959 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:05:29.097672 systemd-resolved[221]: Defaulting to hostname 'linux'.
Oct 30 00:05:29.100205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 00:05:29.104764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:05:29.132484 kernel: SCSI subsystem initialized
Oct 30 00:05:29.138484 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 00:05:29.146484 kernel: iscsi: registered transport (tcp)
Oct 30 00:05:29.161792 kernel: iscsi: registered transport (qla4xxx)
Oct 30 00:05:29.161828 kernel: QLogic iSCSI HBA Driver
Oct 30 00:05:29.173078 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:05:29.186161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:05:29.192801 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:05:29.217337 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:05:29.220351 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 00:05:29.262485 kernel: raid6: avx512x4 gen() 47283 MB/s
Oct 30 00:05:29.279478 kernel: raid6: avx512x2 gen() 47506 MB/s
Oct 30 00:05:29.296478 kernel: raid6: avx512x1 gen() 29976 MB/s
Oct 30 00:05:29.313478 kernel: raid6: avx2x4 gen() 42974 MB/s
Oct 30 00:05:29.331479 kernel: raid6: avx2x2 gen() 44739 MB/s
Oct 30 00:05:29.349008 kernel: raid6: avx2x1 gen() 35639 MB/s
Oct 30 00:05:29.349032 kernel: raid6: using algorithm avx512x2 gen() 47506 MB/s
Oct 30 00:05:29.366849 kernel: raid6: .... xor() 37441 MB/s, rmw enabled
Oct 30 00:05:29.366875 kernel: raid6: using avx512x2 recovery algorithm
Oct 30 00:05:29.383486 kernel: xor: automatically using best checksumming function avx
Oct 30 00:05:29.487486 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 00:05:29.491329 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:05:29.493572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:05:29.513934 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Oct 30 00:05:29.517315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:05:29.522989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 00:05:29.543526 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Oct 30 00:05:29.559203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:05:29.560265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:05:29.592260 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:05:29.598573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 00:05:29.638499 kernel: cryptd: max_cpu_qlen set to 1000
Oct 30 00:05:29.655496 kernel: hv_vmbus: Vmbus version:5.3
Oct 30 00:05:29.663489 kernel: AES CTR mode by8 optimization enabled
Oct 30 00:05:29.666731 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 30 00:05:29.666766 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 30 00:05:29.667331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:05:29.670014 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:29.674735 kernel: PTP clock support registered
Oct 30 00:05:29.675445 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:29.682342 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 30 00:05:29.686394 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Oct 30 00:05:29.686492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:29.695572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:05:29.695647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:29.701275 kernel: hv_vmbus: registering driver hv_pci
Oct 30 00:05:29.703495 kernel: hv_utils: Registering HyperV Utility Driver
Oct 30 00:05:29.703524 kernel: hv_vmbus: registering driver hv_utils
Oct 30 00:05:29.705742 kernel: hv_utils: Shutdown IC version 3.2
Oct 30 00:05:29.705906 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Oct 30 00:05:29.489264 kernel: hv_utils: Heartbeat IC version 3.0
Oct 30 00:05:29.502853 kernel: hv_utils: TimeSync IC version 4.0
Oct 30 00:05:29.502869 systemd-journald[187]: Time jumped backwards, rotating.
Oct 30 00:05:29.502897 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Oct 30 00:05:29.502993 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Oct 30 00:05:29.503636 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 30 00:05:29.503743 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Oct 30 00:05:29.503763 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Oct 30 00:05:29.491356 systemd-resolved[221]: Clock change detected. Flushing caches.
Oct 30 00:05:29.491824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:29.523294 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Oct 30 00:05:29.527175 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Oct 30 00:05:29.527349 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 30 00:05:29.542371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:29.547995 kernel: hv_vmbus: registering driver hid_hyperv
Oct 30 00:05:29.554475 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Oct 30 00:05:29.554570 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 30 00:05:29.557299 kernel: hv_vmbus: registering driver hv_netvsc
Oct 30 00:05:29.568334 kernel: hv_vmbus: registering driver hv_storvsc
Oct 30 00:05:29.574313 kernel: scsi host0: storvsc_host_t
Oct 30 00:05:29.574437 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff89f (unnamed net_device) (uninitialized): VF slot 1 added
Oct 30 00:05:29.577059 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Oct 30 00:05:29.587416 kernel: nvme nvme0: pci function c05b:00:00.0
Oct 30 00:05:29.590461 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Oct 30 00:05:29.735319 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 30 00:05:29.740285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 30 00:05:29.744873 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Oct 30 00:05:29.745040 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 30 00:05:29.745287 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Oct 30 00:05:29.761295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#25 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 30 00:05:29.776297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 30 00:05:30.001666 kernel: nvme nvme0: using unchecked data buffer
Oct 30 00:05:30.226204 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Oct 30 00:05:30.234064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Oct 30 00:05:30.235027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Oct 30 00:05:30.244200 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Oct 30 00:05:30.250663 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 00:05:30.269299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 30 00:05:30.271663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Oct 30 00:05:30.279357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 30 00:05:30.278241 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:05:30.286688 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:05:30.286931 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:05:30.295344 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:05:30.302596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 00:05:30.324620 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:05:30.591977 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Oct 30 00:05:30.592126 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Oct 30 00:05:30.594406 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Oct 30 00:05:30.595941 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 30 00:05:30.600302 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Oct 30 00:05:30.604375 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Oct 30 00:05:30.608446 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Oct 30 00:05:30.610338 kernel: pci 7870:00:00.0: enabling Extended Tags
Oct 30 00:05:30.626318 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Oct 30 00:05:30.626466 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Oct 30 00:05:30.630331 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Oct 30 00:05:30.633370 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Oct 30 00:05:30.644287 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Oct 30 00:05:30.644433 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff89f eth0: VF registering: eth1
Oct 30 00:05:30.646750 kernel: mana 7870:00:00.0 eth1: joined to eth0
Oct 30 00:05:30.650298 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Oct 30 00:05:31.276443 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 30 00:05:31.276474 disk-uuid[643]: The operation has completed successfully.
Oct 30 00:05:31.332065 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 30 00:05:31.332137 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 00:05:31.359673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 30 00:05:31.370073 sh[690]: Success
Oct 30 00:05:31.399689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 00:05:31.399733 kernel: device-mapper: uevent: version 1.0.3
Oct 30 00:05:31.400768 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 00:05:31.408306 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Oct 30 00:05:31.636492 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 00:05:31.640762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 30 00:05:31.649955 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 30 00:05:31.663352 kernel: BTRFS: device fsid ad8523d8-35e6-44b9-a685-e8d871101da4 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (703)
Oct 30 00:05:31.663384 kernel: BTRFS info (device dm-0): first mount of filesystem ad8523d8-35e6-44b9-a685-e8d871101da4
Oct 30 00:05:31.664517 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:05:31.965668 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 30 00:05:31.965721 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 00:05:31.966787 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 00:05:32.000488 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 30 00:05:32.003561 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:05:32.009014 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 00:05:32.012454 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 00:05:32.016784 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 00:05:32.042603 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (739)
Oct 30 00:05:32.042631 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:05:32.042640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:05:32.083506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:05:32.088142 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 00:05:32.105375 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 30 00:05:32.105392 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Oct 30 00:05:32.105401 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Oct 30 00:05:32.105410 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:05:32.103405 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 00:05:32.110650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 00:05:32.127868 systemd-networkd[868]: lo: Link UP
Oct 30 00:05:32.127875 systemd-networkd[868]: lo: Gained carrier
Oct 30 00:05:32.137417 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Oct 30 00:05:32.137565 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Oct 30 00:05:32.137667 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff89f eth0: Data path switched to VF: enP30832s1
Oct 30 00:05:32.128754 systemd-networkd[868]: Enumeration completed
Oct 30 00:05:32.129079 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 00:05:32.129082 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 00:05:32.129333 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 00:05:32.135421 systemd[1]: Reached target network.target - Network.
Oct 30 00:05:32.137126 systemd-networkd[868]: enP30832s1: Link UP
Oct 30 00:05:32.137187 systemd-networkd[868]: eth0: Link UP
Oct 30 00:05:32.137300 systemd-networkd[868]: eth0: Gained carrier
Oct 30 00:05:32.137310 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 00:05:32.146841 systemd-networkd[868]: enP30832s1: Gained carrier
Oct 30 00:05:32.151394 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Oct 30 00:05:33.187812 ignition[873]: Ignition 2.22.0
Oct 30 00:05:33.187822 ignition[873]: Stage: fetch-offline
Oct 30 00:05:33.190004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:05:33.187923 ignition[873]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:33.194060 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 30 00:05:33.187930 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 30 00:05:33.188022 ignition[873]: parsed url from cmdline: ""
Oct 30 00:05:33.188025 ignition[873]: no config URL provided
Oct 30 00:05:33.188028 ignition[873]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:05:33.188039 ignition[873]: no config at "/usr/lib/ignition/user.ign"
Oct 30 00:05:33.188044 ignition[873]: failed to fetch config: resource requires networking
Oct 30 00:05:33.189057 ignition[873]: Ignition finished successfully
Oct 30 00:05:33.217388 ignition[885]: Ignition 2.22.0
Oct 30 00:05:33.217397 ignition[885]: Stage: fetch
Oct 30 00:05:33.217570 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:33.217579 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 30 00:05:33.217639 ignition[885]: parsed url from cmdline: ""
Oct 30 00:05:33.217642 ignition[885]: no config URL provided
Oct 30 00:05:33.217645 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:05:33.217649 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Oct 30 00:05:33.217668 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Oct 30 00:05:33.278839 ignition[885]: GET result: OK
Oct 30 00:05:33.278908 ignition[885]: config has been read from IMDS userdata
Oct 30 00:05:33.278937 ignition[885]: parsing config with SHA512: 1836d4abcf2135cce43f3457c6de54d9a6276aa85eb3f9948f572dd40ca0d48c7e34e76e23778bad0a22a4ac426b5f01b262bef22aa73c524b377c0c02104f03
Oct 30 00:05:33.282474 unknown[885]: fetched base config from "system"
Oct 30 00:05:33.282489 unknown[885]: fetched base config from "system"
Oct 30 00:05:33.282911 ignition[885]: fetch: fetch complete
Oct 30 00:05:33.282494 unknown[885]: fetched user config from "azure"
Oct 30 00:05:33.282915 ignition[885]: fetch: fetch passed
Oct 30 00:05:33.284693 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 30 00:05:33.282946 ignition[885]: Ignition finished successfully
Oct 30 00:05:33.288128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 30 00:05:33.314718 ignition[891]: Ignition 2.22.0
Oct 30 00:05:33.314727 ignition[891]: Stage: kargs
Oct 30 00:05:33.314898 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:33.317497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 00:05:33.314904 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 30 00:05:33.318879 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 00:05:33.315716 ignition[891]: kargs: kargs passed
Oct 30 00:05:33.315750 ignition[891]: Ignition finished successfully
Oct 30 00:05:33.338361 ignition[897]: Ignition 2.22.0
Oct 30 00:05:33.338369 ignition[897]: Stage: disks
Oct 30 00:05:33.340301 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 00:05:33.338528 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:33.344606 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 00:05:33.338534 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 30 00:05:33.348325 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 00:05:33.339487 ignition[897]: disks: disks passed
Oct 30 00:05:33.351070 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:05:33.339520 ignition[897]: Ignition finished successfully
Oct 30 00:05:33.354322 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 00:05:33.354591 systemd[1]: Reached target basic.target - Basic System.
Oct 30 00:05:33.355195 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 00:05:33.444479 systemd-fsck[905]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Oct 30 00:05:33.448415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 00:05:33.451764 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 00:05:33.681360 systemd-networkd[868]: eth0: Gained IPv6LL
Oct 30 00:05:35.431424 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 02607114-2ead-44bc-a76e-2d51f82b108e r/w with ordered data mode. Quota mode: none.
Oct 30 00:05:35.432035 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 00:05:35.435109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:05:35.466847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 00:05:35.484348 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 00:05:35.488009 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 30 00:05:35.492767 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 00:05:35.492796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:05:35.505252 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (914)
Oct 30 00:05:35.498210 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 00:05:35.510473 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e
Oct 30 00:05:35.510494 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:05:35.503264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 00:05:35.517065 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 30 00:05:35.517102 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Oct 30 00:05:35.518453 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Oct 30 00:05:35.519934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 00:05:36.024465 coreos-metadata[916]: Oct 30 00:05:36.024 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Oct 30 00:05:36.027271 coreos-metadata[916]: Oct 30 00:05:36.027 INFO Fetch successful
Oct 30 00:05:36.028614 coreos-metadata[916]: Oct 30 00:05:36.027 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Oct 30 00:05:36.033891 coreos-metadata[916]: Oct 30 00:05:36.033 INFO Fetch successful
Oct 30 00:05:36.047652 coreos-metadata[916]: Oct 30 00:05:36.047 INFO wrote hostname ci-4459.1.0-n-666d628454 to /sysroot/etc/hostname
Oct 30 00:05:36.049500 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 30 00:05:36.272147 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 00:05:36.318880 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Oct 30 00:05:36.351827 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 00:05:36.369289 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 00:05:37.517586 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 00:05:37.524236 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 00:05:37.534375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 00:05:37.541638 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 00:05:37.543344 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:05:37.565159 ignition[1033]: INFO : Ignition 2.22.0 Oct 30 00:05:37.565159 ignition[1033]: INFO : Stage: mount Oct 30 00:05:37.568505 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:37.568505 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 00:05:37.568505 ignition[1033]: INFO : mount: mount passed Oct 30 00:05:37.568505 ignition[1033]: INFO : Ignition finished successfully Oct 30 00:05:37.566957 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 00:05:37.571455 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 30 00:05:37.574224 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 00:05:37.593901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:05:37.624296 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1047) Oct 30 00:05:37.627244 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:05:37.627270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:05:37.632860 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 30 00:05:37.632894 kernel: BTRFS info (device nvme0n1p6): turning on async discard Oct 30 00:05:37.633855 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 30 00:05:37.635712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 00:05:37.664040 ignition[1064]: INFO : Ignition 2.22.0 Oct 30 00:05:37.664040 ignition[1064]: INFO : Stage: files Oct 30 00:05:37.667372 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:37.667372 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 00:05:37.667372 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Oct 30 00:05:37.716312 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 00:05:37.716312 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 00:05:37.757006 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 00:05:37.760348 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 00:05:37.760348 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 00:05:37.759145 unknown[1064]: wrote ssh authorized keys file for user: core Oct 30 00:05:37.826690 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:05:37.831343 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 30 00:05:37.881375 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 00:05:37.940886 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:05:37.943844 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:05:37.948346 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
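The user and file operations logged above are driven by the passwd and storage sections of the fetched Ignition config. A minimal, hypothetical sketch of such a config (not the actual config delivered through IMDS to this node) that would create the core user with an SSH key and stage the helm tarball:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...example-key-placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          }
        ]
      }
    }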
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:05:37.975005 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 30 00:05:38.327517 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 30 00:05:40.385383 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:05:40.385383 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 30 00:05:40.414011 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:05:40.425765 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:05:40.425765 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 30 00:05:40.425765 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 30 00:05:40.429378 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 00:05:40.429378 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:05:40.429378 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:05:40.429378 ignition[1064]: INFO : files: files passed Oct 30 00:05:40.429378 ignition[1064]: INFO : Ignition finished successfully Oct 30 00:05:40.427379 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 00:05:40.445359 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 00:05:40.451163 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 30 00:05:40.457997 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 30 00:05:40.458085 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 30 00:05:40.477231 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:05:40.479770 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:05:40.479770 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 00:05:40.482202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 00:05:40.487809 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 00:05:40.490083 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 00:05:40.519683 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 00:05:40.519754 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
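The prepare-helm.service unit written and preset-enabled above would normally be supplied through the systemd.units section of the same Ignition config. Its contents are not shown in this log; a hypothetical unit consistent with the "Unpack helm to /opt/bin" description that appears when it later starts might look like:

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.3-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStartPre=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar --strip-components=1 -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm

    [Install]
    WantedBy=multi-user.target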
Oct 30 00:05:40.524255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 00:05:40.526030 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 00:05:40.530175 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 00:05:40.532679 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 00:05:40.554396 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 00:05:40.555312 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 00:05:40.572029 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:05:40.573625 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 00:05:40.577307 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 00:05:40.582797 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 00:05:40.584258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 00:05:40.588526 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 00:05:40.591422 systemd[1]: Stopped target basic.target - Basic System. Oct 30 00:05:40.592218 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 00:05:40.592728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 00:05:40.593260 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 30 00:05:40.593596 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 30 00:05:40.593895 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 00:05:40.594172 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 00:05:40.594800 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 00:05:40.595028 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 00:05:40.595336 systemd[1]: Stopped target swap.target - Swaps. Oct 30 00:05:40.595584 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 00:05:40.595686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 00:05:40.596212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 00:05:40.596524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 00:05:40.596754 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 00:05:40.607292 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 00:05:40.635426 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 30 00:05:40.635540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 00:05:40.639061 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 00:05:40.639183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 00:05:40.642202 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 00:05:40.642326 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 00:05:40.647431 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 30 00:05:40.647532 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Oct 30 00:05:40.650372 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 00:05:40.650649 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 00:05:40.650765 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 00:05:40.652773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 00:05:40.661101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 00:05:40.661246 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:05:40.662164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 00:05:40.662258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 00:05:40.664917 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 30 00:05:40.669359 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 00:05:40.695846 ignition[1118]: INFO : Ignition 2.22.0 Oct 30 00:05:40.695846 ignition[1118]: INFO : Stage: umount Oct 30 00:05:40.695846 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:40.695846 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 00:05:40.702344 ignition[1118]: INFO : umount: umount passed Oct 30 00:05:40.702344 ignition[1118]: INFO : Ignition finished successfully Oct 30 00:05:40.701732 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 00:05:40.701807 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 00:05:40.705920 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 00:05:40.705951 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 00:05:40.708901 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 00:05:40.708936 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 00:05:40.710421 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 30 00:05:40.710456 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 30 00:05:40.710502 systemd[1]: Stopped target network.target - Network. Oct 30 00:05:40.710525 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 00:05:40.710553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 00:05:40.710814 systemd[1]: Stopped target paths.target - Path Units. Oct 30 00:05:40.710833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 00:05:40.716116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 00:05:40.724415 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 00:05:40.726495 systemd[1]: Stopped target sockets.target - Socket Units. Oct 30 00:05:40.743341 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 00:05:40.743379 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 00:05:40.747336 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 00:05:40.747364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 00:05:40.749584 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 00:05:40.749626 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 00:05:40.752151 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 30 00:05:40.752184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Oct 30 00:05:40.752669 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 00:05:40.760705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 00:05:40.771636 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 00:05:40.771720 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 00:05:40.776859 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 30 00:05:40.777013 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 00:05:40.777103 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 00:05:40.782498 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 30 00:05:40.782905 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 30 00:05:40.786358 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 00:05:40.786389 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 00:05:40.790872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 00:05:40.793918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 00:05:40.793966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 00:05:40.797631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 00:05:40.797669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 00:05:40.813411 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 00:05:40.813456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 00:05:40.817382 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 00:05:40.817430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:05:40.822583 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:05:40.833102 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 30 00:05:40.833158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 30 00:05:40.835446 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 00:05:40.836701 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 00:05:40.836829 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:05:40.867347 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff89f eth0: Data path switched from VF: enP30832s1 Oct 30 00:05:40.867540 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 30 00:05:40.852018 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 00:05:40.852075 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 00:05:40.853272 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 30 00:05:40.853461 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 00:05:40.853587 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 00:05:40.853621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 00:05:40.853939 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 00:05:40.853968 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Oct 30 00:05:40.854180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 00:05:40.854206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 00:05:40.861112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 00:05:40.875355 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 30 00:05:40.879232 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 00:05:40.886915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 00:05:40.886962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:05:40.891768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 00:05:40.892754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:05:40.899784 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 30 00:05:40.899819 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 30 00:05:40.899844 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 00:05:40.900058 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 00:05:40.900119 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 00:05:40.913295 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 00:05:40.914719 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 00:05:40.998448 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 00:05:40.998562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 00:05:41.002514 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 00:05:41.006355 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 30 00:05:41.006406 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 30 00:05:41.010866 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 00:05:41.042230 systemd[1]: Switching root. Oct 30 00:05:41.130813 systemd-journald[187]: Journal stopped Oct 30 00:05:48.279956 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Oct 30 00:05:48.279983 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 00:05:48.279996 kernel: SELinux: policy capability open_perms=1 Oct 30 00:05:48.280005 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 00:05:48.280014 kernel: SELinux: policy capability always_check_network=0 Oct 30 00:05:48.280022 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 00:05:48.280032 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 00:05:48.280041 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 00:05:48.280051 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 00:05:48.280060 kernel: SELinux: policy capability userspace_initial_context=0 Oct 30 00:05:48.280069 kernel: audit: type=1403 audit(1761782742.328:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 00:05:48.280079 systemd[1]: Successfully loaded SELinux policy in 215.741ms. Oct 30 00:05:48.280090 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.690ms. 
Oct 30 00:05:48.280102 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 00:05:48.280113 systemd[1]: Detected virtualization microsoft. Oct 30 00:05:48.280123 systemd[1]: Detected architecture x86-64. Oct 30 00:05:48.280132 systemd[1]: Detected first boot. Oct 30 00:05:48.280142 systemd[1]: Hostname set to . Oct 30 00:05:48.280151 systemd[1]: Initializing machine ID from random generator. Oct 30 00:05:48.280161 zram_generator::config[1162]: No configuration found. Oct 30 00:05:48.280172 kernel: Guest personality initialized and is inactive Oct 30 00:05:48.280182 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Oct 30 00:05:48.280190 kernel: Initialized host personality Oct 30 00:05:48.280206 kernel: NET: Registered PF_VSOCK protocol family Oct 30 00:05:48.280216 systemd[1]: Populated /etc with preset unit settings. Oct 30 00:05:48.280226 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 30 00:05:48.280236 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 00:05:48.280247 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 00:05:48.280256 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 00:05:48.280266 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 00:05:48.280297 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 00:05:48.280307 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 00:05:48.280316 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 00:05:48.280326 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 00:05:48.280338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 00:05:48.280347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 00:05:48.280357 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 00:05:48.280366 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 00:05:48.280376 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 00:05:48.280386 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 30 00:05:48.280399 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 30 00:05:48.280409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 00:05:48.280419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 00:05:48.280430 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 30 00:05:48.280440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 00:05:48.280451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 00:05:48.280462 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Oct 30 00:05:48.280472 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 00:05:48.280482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 00:05:48.280492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 30 00:05:48.280504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 00:05:48.280515 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 00:05:48.280525 systemd[1]: Reached target slices.target - Slice Units. Oct 30 00:05:48.280535 systemd[1]: Reached target swap.target - Swaps. Oct 30 00:05:48.280546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 00:05:48.280556 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 00:05:48.280567 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 00:05:48.280578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 00:05:48.280589 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 00:05:48.280599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 00:05:48.280610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 30 00:05:48.280620 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 00:05:48.280630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 00:05:48.280642 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 00:05:48.280652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:48.280663 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 00:05:48.280674 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 00:05:48.280684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 00:05:48.280695 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 00:05:48.280706 systemd[1]: Reached target machines.target - Containers. Oct 30 00:05:48.280716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 00:05:48.280728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:05:48.280739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 00:05:48.280749 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 00:05:48.280761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:05:48.280771 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 00:05:48.280782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:05:48.280792 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 00:05:48.280802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:05:48.280872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Oct 30 00:05:48.280883 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 00:05:48.280893 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 00:05:48.280901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 00:05:48.280911 systemd[1]: Stopped systemd-fsck-usr.service. Oct 30 00:05:48.280921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:05:48.280930 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 00:05:48.280940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 00:05:48.280949 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 00:05:48.280960 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 00:05:48.280970 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 00:05:48.280979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 00:05:48.280988 systemd[1]: verity-setup.service: Deactivated successfully. Oct 30 00:05:48.280997 systemd[1]: Stopped verity-setup.service. Oct 30 00:05:48.281006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:48.281016 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 00:05:48.281025 kernel: fuse: init (API version 7.41) Oct 30 00:05:48.281035 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 30 00:05:48.281044 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 00:05:48.281052 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 00:05:48.281061 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 30 00:05:48.281070 kernel: loop: module loaded Oct 30 00:05:48.281079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 00:05:48.281087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 00:05:48.281096 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 00:05:48.281105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 00:05:48.281137 systemd-journald[1245]: Collecting audit messages is disabled. Oct 30 00:05:48.281158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:05:48.281166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:05:48.281177 systemd-journald[1245]: Journal started Oct 30 00:05:48.281200 systemd-journald[1245]: Runtime Journal (/run/log/journal/ae9d89fb62a442329e68605dd76e4ac4) is 8M, max 158.6M, 150.6M free. Oct 30 00:05:47.740840 systemd[1]: Queued start job for default target multi-user.target. Oct 30 00:05:48.286488 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 00:05:47.749648 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 30 00:05:47.749936 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 00:05:48.288938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
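The runtime journal figures reported above (8M in use, 158.6M max under /run) come from journald's built-in size defaults, which are derived from a percentage of the backing filesystem rather than explicit configuration. If one wanted to pin those limits, the relevant journald.conf options would be set like this (purely illustrative values):

    [Journal]
    # cap the volatile journal under /run
    RuntimeMaxUse=160M
    # cap the persistent journal under /var/log/journal
    SystemMaxUse=2G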
Oct 30 00:05:48.289075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:05:48.292113 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 00:05:48.292288 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 00:05:48.294094 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:05:48.294210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:05:48.297649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 00:05:48.299530 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 00:05:48.302809 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 00:05:48.315085 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 00:05:48.317624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 00:05:48.321248 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 00:05:48.325010 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 00:05:48.330345 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 00:05:48.332835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 00:05:48.332861 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 00:05:48.336183 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 00:05:48.341371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 30 00:05:48.366045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:05:48.394290 kernel: ACPI: bus type drm_connector registered Oct 30 00:05:48.394409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 00:05:48.403628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 30 00:05:48.406431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:05:48.407376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 00:05:48.409700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:05:48.410505 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 00:05:48.414081 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 00:05:48.418377 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 00:05:48.421653 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 00:05:48.421837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 00:05:48.424042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:05:48.427899 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 00:05:48.430521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Oct 30 00:05:48.447029 systemd-journald[1245]: Time spent on flushing to /var/log/journal/ae9d89fb62a442329e68605dd76e4ac4 is 46.763ms for 991 entries. Oct 30 00:05:48.447029 systemd-journald[1245]: System Journal (/var/log/journal/ae9d89fb62a442329e68605dd76e4ac4) is 11.8M, max 2.6G, 2.6G free. Oct 30 00:05:48.773224 systemd-journald[1245]: Received client request to flush runtime journal. Oct 30 00:05:48.773259 systemd-journald[1245]: /var/log/journal/ae9d89fb62a442329e68605dd76e4ac4/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Oct 30 00:05:48.773299 systemd-journald[1245]: Rotating system journal. Oct 30 00:05:48.773317 kernel: loop0: detected capacity change from 0 to 128016 Oct 30 00:05:48.458234 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 00:05:48.462436 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 00:05:48.464104 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 00:05:48.549919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 00:05:48.763609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 00:05:48.764269 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 00:05:48.773917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 00:05:49.072366 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 00:05:49.121299 kernel: loop1: detected capacity change from 0 to 27936 Oct 30 00:05:49.225303 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 00:05:49.229098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 00:05:49.242051 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 00:05:49.381388 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Oct 30 00:05:49.381404 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Oct 30 00:05:49.396463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:05:49.399168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:05:49.417998 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Oct 30 00:05:49.664298 kernel: loop2: detected capacity change from 0 to 110984 Oct 30 00:05:50.089351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:05:50.095182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:05:50.116295 kernel: loop3: detected capacity change from 0 to 229808 Oct 30 00:05:50.142366 kernel: loop4: detected capacity change from 0 to 128016 Oct 30 00:05:50.146782 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 00:05:50.153306 kernel: loop5: detected capacity change from 0 to 27936 Oct 30 00:05:50.161567 kernel: loop6: detected capacity change from 0 to 110984 Oct 30 00:05:50.165096 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 00:05:50.173299 kernel: loop7: detected capacity change from 0 to 229808 Oct 30 00:05:50.197519 (sd-merge)[1360]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Oct 30 00:05:50.198706 (sd-merge)[1360]: Merged extensions into '/usr'. 
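The (sd-merge) lines above are systemd-sysext overlaying the listed extension images onto /usr; the kubernetes image, for instance, is found through the /etc/extensions/kubernetes.raw symlink the Ignition files stage created earlier. After boot the merged state can be inspected or re-applied with the systemd-sysext tool, for example:

    systemd-sysext status    # show which hierarchies currently have extensions merged
    systemd-sysext refresh   # re-merge after adding or removing an extension image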
Oct 30 00:05:50.212966 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 00:05:50.213049 systemd[1]: Reloading... Oct 30 00:05:50.245496 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 00:05:50.248314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Oct 30 00:05:50.254287 kernel: hv_vmbus: registering driver hyperv_fb Oct 30 00:05:50.312311 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 30 00:05:50.316296 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 30 00:05:50.328618 zram_generator::config[1411]: No configuration found. Oct 30 00:05:50.360291 kernel: hv_vmbus: registering driver hv_balloon Oct 30 00:05:50.376514 kernel: Console: switching to colour dummy device 80x25 Oct 30 00:05:50.380319 kernel: Console: switching to colour frame buffer device 128x48 Oct 30 00:05:50.421306 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 30 00:05:50.638225 systemd-networkd[1334]: lo: Link UP Oct 30 00:05:50.638232 systemd-networkd[1334]: lo: Gained carrier Oct 30 00:05:50.640332 systemd-networkd[1334]: Enumeration completed Oct 30 00:05:50.640600 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:05:50.640602 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:05:50.643334 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Oct 30 00:05:50.653286 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Oct 30 00:05:50.653474 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521ff89f eth0: Data path switched to VF: enP30832s1 Oct 30 00:05:50.653421 systemd-networkd[1334]: enP30832s1: Link UP Oct 30 00:05:50.653489 systemd-networkd[1334]: eth0: Link UP Oct 30 00:05:50.653491 systemd-networkd[1334]: eth0: Gained carrier Oct 30 00:05:50.653506 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:05:50.659470 systemd-networkd[1334]: enP30832s1: Gained carrier Oct 30 00:05:50.665316 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 30 00:05:50.669046 systemd[1]: Reloading finished in 455 ms. Oct 30 00:05:50.681288 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Oct 30 00:05:50.693116 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 00:05:50.694649 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:05:50.696246 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 00:05:50.717378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Oct 30 00:05:50.735914 systemd[1]: Starting ensure-sysext.service... Oct 30 00:05:50.737564 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 00:05:50.748381 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 00:05:50.751626 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
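eth0 above is matched by the catch-all /usr/lib/systemd/network/zz-default.network file, which is why DHCP hands it 10.200.8.44/24. A .network file in that catch-all style (an approximation, not necessarily the exact file Flatcar ships) looks like:

    [Match]
    # the zz- prefix sorts last, so any more specific .network file is matched first
    Name=*

    [Network]
    DHCP=yes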
Oct 30 00:05:50.755022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 00:05:50.761992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:05:50.774221 systemd[1]: Reload requested from client PID 1497 ('systemctl') (unit ensure-sysext.service)... Oct 30 00:05:50.774228 systemd[1]: Reloading... Oct 30 00:05:50.774956 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 30 00:05:50.774976 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 30 00:05:50.775148 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 00:05:50.775378 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 00:05:50.776551 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 00:05:50.776756 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Oct 30 00:05:50.776796 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Oct 30 00:05:50.829303 zram_generator::config[1533]: No configuration found. Oct 30 00:05:50.833233 systemd-tmpfiles[1501]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:05:50.833243 systemd-tmpfiles[1501]: Skipping /boot Oct 30 00:05:50.840168 systemd-tmpfiles[1501]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:05:50.840241 systemd-tmpfiles[1501]: Skipping /boot Oct 30 00:05:50.989180 systemd[1]: Reloading finished in 214 ms. Oct 30 00:05:51.014531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 00:05:51.017030 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 00:05:51.019356 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:05:51.025931 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:05:51.065562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 00:05:51.068460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 00:05:51.079455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:05:51.083471 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 00:05:51.087266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.087427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:05:51.089512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:05:51.094368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:05:51.097300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:05:51.097414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
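The "Duplicate line for path ... ignoring" notices above mean that two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the entry it reads first and ignores the later duplicate. A contrived pair of fragments (purely illustrative, not the files actually shipped) that would trigger the same notice for /root:

    # /usr/lib/tmpfiles.d/provision.conf
    d /root 0700 root root -

    # /usr/lib/tmpfiles.d/zz-local.conf  (hypothetical second fragment)
    d /root 0750 root root -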
Oct 30 00:05:51.097497 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:05:51.097569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.099872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.101480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:05:51.101610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:05:51.101678 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:05:51.101747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.104540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:05:51.104701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:05:51.111637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 00:05:51.111777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:05:51.114071 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:05:51.114437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:05:51.116905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.117755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:05:51.119991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:05:51.122349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 00:05:51.122491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:05:51.122577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:05:51.122660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:05:51.122768 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 00:05:51.123119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:05:51.129315 systemd[1]: Finished ensure-sysext.service. Oct 30 00:05:51.132429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 00:05:51.139906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 30 00:05:51.140038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:05:51.143199 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 00:05:51.143817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 00:05:51.146886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:05:51.193324 systemd-resolved[1604]: Positive Trust Anchors: Oct 30 00:05:51.193479 systemd-resolved[1604]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:05:51.193523 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:05:51.209051 systemd-resolved[1604]: Using system hostname 'ci-4459.1.0-n-666d628454'. Oct 30 00:05:51.209912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:05:51.211495 systemd[1]: Reached target network.target - Network. Oct 30 00:05:51.211549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:05:51.277219 augenrules[1637]: No rules Oct 30 00:05:51.277857 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:05:51.278000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:05:51.392045 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 00:05:52.065482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:05:52.369374 systemd-networkd[1334]: eth0: Gained IPv6LL Oct 30 00:05:52.371015 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 00:05:52.374473 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 00:05:53.497603 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 00:05:53.499666 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 00:05:56.930672 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 00:05:56.970126 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 00:05:56.972909 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 00:05:57.001607 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 00:05:57.003398 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:05:57.004822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 00:05:57.006408 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 00:05:57.009343 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
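The augenrules "No rules" line above indicates that no audit rules are configured under /etc/audit/rules.d, so audit-rules.service loads an empty set. Had any rules been present, a fragment would look like the following illustrative watch rule (not something this node ships):

    # /etc/audit/rules.d/example.rules  (hypothetical)
    # record writes and attribute changes to sshd's configuration
    -w /etc/ssh/sshd_config -p wa -k sshd_config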
Oct 30 00:05:57.011014 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 00:05:57.014407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 00:05:57.015881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 00:05:57.019332 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 00:05:57.019365 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:05:57.020451 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:05:57.037645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 00:05:57.039826 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 00:05:57.043622 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 00:05:57.047425 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 00:05:57.049091 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 00:05:57.053479 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 00:05:57.056534 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 00:05:57.059762 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 00:05:57.062964 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:05:57.063983 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:05:57.066352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:05:57.066376 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:05:57.080472 systemd[1]: Starting chronyd.service - NTP client/server... Oct 30 00:05:57.084355 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 00:05:57.095379 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 30 00:05:57.100390 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 00:05:57.103099 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 00:05:57.107089 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 00:05:57.114638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 00:05:57.116250 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 00:05:57.117138 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 00:05:57.119240 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Oct 30 00:05:57.120458 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Oct 30 00:05:57.122444 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 30 00:05:57.127375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:05:57.131737 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Oct 30 00:05:57.138419 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 00:05:57.142577 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 00:05:57.148503 jq[1662]: false Oct 30 00:05:57.148817 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 00:05:57.155386 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 00:05:57.159567 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 00:05:57.161119 KVP[1665]: KVP starting; pid is:1665 Oct 30 00:05:57.162482 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 00:05:57.162858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 00:05:57.165422 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 00:05:57.169739 kernel: hv_utils: KVP IC version 4.0 Oct 30 00:05:57.169801 KVP[1665]: KVP LIC Version: 3.1 Oct 30 00:05:57.171143 chronyd[1654]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Oct 30 00:05:57.172348 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 00:05:57.177313 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 00:05:57.181623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 00:05:57.182478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 00:05:57.200492 jq[1674]: true Oct 30 00:05:57.201798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 00:05:57.202997 extend-filesystems[1663]: Found /dev/nvme0n1p6 Oct 30 00:05:57.202924 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 00:05:57.206125 oslogin_cache_refresh[1664]: Refreshing passwd entry cache Oct 30 00:05:57.209727 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Refreshing passwd entry cache Oct 30 00:05:57.220072 chronyd[1654]: Timezone right/UTC failed leap second check, ignoring Oct 30 00:05:57.220246 systemd[1]: Started chronyd.service - NTP client/server. Oct 30 00:05:57.220188 chronyd[1654]: Loaded seccomp filter (level 2) Oct 30 00:05:57.229434 extend-filesystems[1663]: Found /dev/nvme0n1p9 Oct 30 00:05:57.232492 jq[1693]: true Oct 30 00:05:57.232405 oslogin_cache_refresh[1664]: Failure getting users, quitting Oct 30 00:05:57.232713 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Failure getting users, quitting Oct 30 00:05:57.232713 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:05:57.232713 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Refreshing group entry cache Oct 30 00:05:57.232419 oslogin_cache_refresh[1664]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:05:57.232455 oslogin_cache_refresh[1664]: Refreshing group entry cache Oct 30 00:05:57.233146 (ntainerd)[1697]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 00:05:57.233696 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 30 00:05:57.233875 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 00:05:57.245292 extend-filesystems[1663]: Checking size of /dev/nvme0n1p9 Oct 30 00:05:57.260869 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Failure getting groups, quitting Oct 30 00:05:57.260869 google_oslogin_nss_cache[1664]: oslogin_cache_refresh[1664]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:05:57.260865 oslogin_cache_refresh[1664]: Failure getting groups, quitting Oct 30 00:05:57.260874 oslogin_cache_refresh[1664]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:05:57.262012 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 00:05:57.262206 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 00:05:57.266241 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 00:05:57.275042 update_engine[1673]: I20251030 00:05:57.274756 1673 main.cc:92] Flatcar Update Engine starting Oct 30 00:05:57.286177 tar[1682]: linux-amd64/LICENSE Oct 30 00:05:57.286397 tar[1682]: linux-amd64/helm Oct 30 00:05:57.314913 systemd-logind[1672]: New seat seat0. Oct 30 00:05:57.319008 systemd-logind[1672]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 00:05:57.319119 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 00:05:57.351130 bash[1727]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:05:57.352472 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 00:05:57.355476 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 30 00:05:57.395125 extend-filesystems[1663]: Old size kept for /dev/nvme0n1p9 Oct 30 00:05:57.397737 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 00:05:57.398084 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 00:05:57.475792 sshd_keygen[1708]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 00:05:57.547315 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 00:05:57.560391 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 00:05:57.563915 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 30 00:05:57.586105 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 00:05:57.587444 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 00:05:57.590247 dbus-daemon[1657]: [system] SELinux support is enabled Oct 30 00:05:57.591723 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 00:05:57.597896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 00:05:57.597918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 00:05:57.598523 update_engine[1673]: I20251030 00:05:57.598373 1673 update_check_scheduler.cc:74] Next update check in 11m30s Oct 30 00:05:57.603481 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 00:05:57.605934 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 30 00:05:57.605960 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 00:05:57.609774 systemd[1]: Started update-engine.service - Update Engine. Oct 30 00:05:57.612777 dbus-daemon[1657]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 30 00:05:57.616475 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 00:05:57.620877 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 30 00:05:57.643140 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 00:05:57.647492 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 00:05:57.652021 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 00:05:57.654624 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 00:05:57.673443 coreos-metadata[1656]: Oct 30 00:05:57.673 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 30 00:05:57.675230 coreos-metadata[1656]: Oct 30 00:05:57.675 INFO Fetch successful Oct 30 00:05:57.675724 coreos-metadata[1656]: Oct 30 00:05:57.675 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 30 00:05:57.680341 coreos-metadata[1656]: Oct 30 00:05:57.679 INFO Fetch successful Oct 30 00:05:57.680341 coreos-metadata[1656]: Oct 30 00:05:57.680 INFO Fetching http://168.63.129.16/machine/370ff62f-0fe0-4d75-90ab-c19f1ef04204/994427fc%2Dba4f%2D4eb5%2Dac02%2Db2145c3cedc9.%5Fci%2D4459.1.0%2Dn%2D666d628454?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 30 00:05:57.683601 coreos-metadata[1656]: Oct 30 00:05:57.683 INFO Fetch successful Oct 30 00:05:57.683601 coreos-metadata[1656]: Oct 30 00:05:57.683 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 30 00:05:57.693429 coreos-metadata[1656]: Oct 30 00:05:57.691 INFO Fetch successful Oct 30 00:05:57.738031 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 30 00:05:57.740410 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 00:05:57.803105 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 00:05:57.818471 tar[1682]: linux-amd64/README.md Oct 30 00:05:57.834909 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
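(The metadata fetches above hit two Azure endpoints: the WireServer at 168.63.129.16 and the instance metadata service at 169.254.169.254. As a hedged illustration only, not the agent's own code, the vmSize query can be reproduced with a few lines of Python; IMDS answering only requests that carry a "Metadata: true" header is standard IMDS behaviour rather than something shown in this log.)

import urllib.request

# Same endpoint coreos-metadata queries in the log above; IMDS only answers
# requests carrying the "Metadata: true" header.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    print(resp.read().decode())  # prints the VM size as plain text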
Oct 30 00:05:58.133546 containerd[1697]: time="2025-10-30T00:05:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 00:05:58.134130 containerd[1697]: time="2025-10-30T00:05:58.134103149Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141155340Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.328µs" Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141177281Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141194181Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141306714Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141317057Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141334026Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141379724Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141388092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141566985Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141575955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141583781Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:05:58.141922 containerd[1697]: time="2025-10-30T00:05:58.141590919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 00:05:58.142168 containerd[1697]: time="2025-10-30T00:05:58.141645715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 00:05:58.142168 containerd[1697]: time="2025-10-30T00:05:58.142078154Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:05:58.142168 containerd[1697]: time="2025-10-30T00:05:58.142105800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Oct 30 00:05:58.142168 containerd[1697]: time="2025-10-30T00:05:58.142114517Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 00:05:58.142168 containerd[1697]: time="2025-10-30T00:05:58.142140622Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 00:05:58.143027 containerd[1697]: time="2025-10-30T00:05:58.142363157Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 00:05:58.143027 containerd[1697]: time="2025-10-30T00:05:58.142410901Z" level=info msg="metadata content store policy set" policy=shared Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157489236Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157523606Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157533455Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157541441Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157549186Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157560450Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157570202Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157581437Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157590841Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157596972Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157603404Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157611674Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157685424Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 00:05:58.158294 containerd[1697]: time="2025-10-30T00:05:58.157696048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157706217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157723285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157734921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157744210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157753421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157762336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157771506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157780549Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157789092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157842131Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157852798Z" level=info msg="Start snapshots syncer" Oct 30 00:05:58.158560 containerd[1697]: time="2025-10-30T00:05:58.157869333Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 00:05:58.158693 containerd[1697]: time="2025-10-30T00:05:58.158068625Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 00:05:58.158693 containerd[1697]: time="2025-10-30T00:05:58.158108733Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158152812Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158217757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158239062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158248740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158258246Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 00:05:58.158778 containerd[1697]: time="2025-10-30T00:05:58.158268303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 00:05:58.158958 containerd[1697]: time="2025-10-30T00:05:58.158947637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 00:05:58.158997 containerd[1697]: time="2025-10-30T00:05:58.158991206Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 00:05:58.159031 containerd[1697]: time="2025-10-30T00:05:58.159026518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 00:05:58.159052 containerd[1697]: 
time="2025-10-30T00:05:58.159048007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 00:05:58.159072 containerd[1697]: time="2025-10-30T00:05:58.159068732Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 00:05:58.159347 containerd[1697]: time="2025-10-30T00:05:58.159329614Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:05:58.160139 containerd[1697]: time="2025-10-30T00:05:58.160119147Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:05:58.160179 containerd[1697]: time="2025-10-30T00:05:58.160173188Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:05:58.160204 containerd[1697]: time="2025-10-30T00:05:58.160199431Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:05:58.160225 containerd[1697]: time="2025-10-30T00:05:58.160218868Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 00:05:58.160248 containerd[1697]: time="2025-10-30T00:05:58.160244401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 00:05:58.160298 containerd[1697]: time="2025-10-30T00:05:58.160290923Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 00:05:58.160346 containerd[1697]: time="2025-10-30T00:05:58.160339081Z" level=info msg="runtime interface created" Oct 30 00:05:58.160375 containerd[1697]: time="2025-10-30T00:05:58.160370419Z" level=info msg="created NRI interface" Oct 30 00:05:58.160435 containerd[1697]: time="2025-10-30T00:05:58.160428956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 00:05:58.160970 containerd[1697]: time="2025-10-30T00:05:58.160465964Z" level=info msg="Connect containerd service" Oct 30 00:05:58.160970 containerd[1697]: time="2025-10-30T00:05:58.160502624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 00:05:58.161911 containerd[1697]: time="2025-10-30T00:05:58.161877034Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:05:58.228916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:05:58.415516 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:05:58.907820 kubelet[1804]: E1030 00:05:58.907769 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:05:58.909400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:05:58.909527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:05:58.909835 systemd[1]: kubelet.service: Consumed 809ms CPU time, 267.1M memory peak. Oct 30 00:05:58.982771 containerd[1697]: time="2025-10-30T00:05:58.982708831Z" level=info msg="Start subscribing containerd event" Oct 30 00:05:58.982862 containerd[1697]: time="2025-10-30T00:05:58.982752621Z" level=info msg="Start recovering state" Oct 30 00:05:58.982918 containerd[1697]: time="2025-10-30T00:05:58.982904717Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 00:05:58.982947 containerd[1697]: time="2025-10-30T00:05:58.982941542Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 00:05:58.983031 containerd[1697]: time="2025-10-30T00:05:58.983023435Z" level=info msg="Start event monitor" Oct 30 00:05:58.983076 containerd[1697]: time="2025-10-30T00:05:58.983070153Z" level=info msg="Start cni network conf syncer for default" Oct 30 00:05:58.983137 containerd[1697]: time="2025-10-30T00:05:58.983100683Z" level=info msg="Start streaming server" Oct 30 00:05:58.983137 containerd[1697]: time="2025-10-30T00:05:58.983114919Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 00:05:58.983137 containerd[1697]: time="2025-10-30T00:05:58.983121957Z" level=info msg="runtime interface starting up..." Oct 30 00:05:58.983285 containerd[1697]: time="2025-10-30T00:05:58.983127882Z" level=info msg="starting plugins..." Oct 30 00:05:58.983285 containerd[1697]: time="2025-10-30T00:05:58.983207461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 00:05:58.983615 containerd[1697]: time="2025-10-30T00:05:58.983417999Z" level=info msg="containerd successfully booted in 0.850169s" Oct 30 00:05:58.983497 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 00:05:58.985938 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 00:05:58.988370 systemd[1]: Startup finished in 2.668s (kernel) + 13.654s (initrd) + 16.874s (userspace) = 33.197s. 
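(The kubelet failure above is the expected pre-bootstrap state: the unit starts, but /var/lib/kubelet/config.yaml has not been written yet. With kubeadm-style provisioning that file appears during init/join, which this log has not reached. A trivial, illustrative check of that precondition:)

from pathlib import Path

# The kubelet exits immediately while this file is absent (see the error above);
# systemd then schedules the restarts recorded later in the log.
config = Path("/var/lib/kubelet/config.yaml")
if config.is_file():
    print(f"kubelet config present ({config.stat().st_size} bytes)")
else:
    print("kubelet config missing; the unit will keep restarting until it exists")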
Oct 30 00:05:59.684208 waagent[1775]: 2025-10-30T00:05:59.684146Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Oct 30 00:05:59.685750 waagent[1775]: 2025-10-30T00:05:59.685673Z INFO Daemon Daemon OS: flatcar 4459.1.0 Oct 30 00:05:59.686706 waagent[1775]: 2025-10-30T00:05:59.686641Z INFO Daemon Daemon Python: 3.11.13 Oct 30 00:05:59.687803 waagent[1775]: 2025-10-30T00:05:59.687758Z INFO Daemon Daemon Run daemon Oct 30 00:05:59.688996 waagent[1775]: 2025-10-30T00:05:59.688962Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Oct 30 00:05:59.690553 waagent[1775]: 2025-10-30T00:05:59.690523Z INFO Daemon Daemon Using waagent for provisioning Oct 30 00:05:59.691956 waagent[1775]: 2025-10-30T00:05:59.691927Z INFO Daemon Daemon Activate resource disk Oct 30 00:05:59.692385 waagent[1775]: 2025-10-30T00:05:59.692355Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 30 00:05:59.696047 waagent[1775]: 2025-10-30T00:05:59.695820Z INFO Daemon Daemon Found device: None Oct 30 00:05:59.697048 waagent[1775]: 2025-10-30T00:05:59.696971Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 30 00:05:59.699044 waagent[1775]: 2025-10-30T00:05:59.698953Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 30 00:05:59.701955 waagent[1775]: 2025-10-30T00:05:59.701911Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 30 00:05:59.702444 waagent[1775]: 2025-10-30T00:05:59.702418Z INFO Daemon Daemon Running default provisioning handler Oct 30 00:05:59.708920 waagent[1775]: 2025-10-30T00:05:59.708721Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Oct 30 00:05:59.712263 waagent[1775]: 2025-10-30T00:05:59.712219Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 30 00:05:59.714720 waagent[1775]: 2025-10-30T00:05:59.712388Z INFO Daemon Daemon cloud-init is enabled: False Oct 30 00:05:59.714720 waagent[1775]: 2025-10-30T00:05:59.712641Z INFO Daemon Daemon Copying ovf-env.xml Oct 30 00:05:59.722708 login[1779]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 30 00:05:59.736902 login[1778]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 30 00:05:59.741440 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 00:05:59.742446 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 00:05:59.748146 systemd-logind[1672]: New session 1 of user core. Oct 30 00:05:59.769524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 00:05:59.771656 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 00:05:59.796559 (systemd)[1835]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 00:05:59.798063 systemd-logind[1672]: New session c1 of user core. Oct 30 00:05:59.829156 waagent[1775]: 2025-10-30T00:05:59.829116Z INFO Daemon Daemon Successfully mounted dvd Oct 30 00:05:59.853585 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.853973Z INFO Daemon Daemon Detect protocol endpoint Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.854114Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.854761Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.854979Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.855103Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 30 00:05:59.855651 waagent[1775]: 2025-10-30T00:05:59.855331Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 30 00:05:59.870581 waagent[1775]: 2025-10-30T00:05:59.870552Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 30 00:05:59.870879 waagent[1775]: 2025-10-30T00:05:59.870799Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 30 00:05:59.871714 waagent[1775]: 2025-10-30T00:05:59.870878Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 30 00:05:59.982200 waagent[1775]: 2025-10-30T00:05:59.982157Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 30 00:05:59.983151 waagent[1775]: 2025-10-30T00:05:59.982515Z INFO Daemon Daemon Forcing an update of the goal state. Oct 30 00:05:59.986143 waagent[1775]: 2025-10-30T00:05:59.986110Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 30 00:06:00.005435 waagent[1775]: 2025-10-30T00:06:00.005410Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 30 00:06:00.007514 waagent[1775]: 2025-10-30T00:06:00.005843Z INFO Daemon Oct 30 00:06:00.007514 waagent[1775]: 2025-10-30T00:06:00.006124Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6170dd26-364b-4fef-9464-a0221c2ecd84 eTag: 9177463258849795832 source: Fabric] Oct 30 00:06:00.007514 waagent[1775]: 2025-10-30T00:06:00.006369Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Oct 30 00:06:00.007514 waagent[1775]: 2025-10-30T00:06:00.006646Z INFO Daemon Oct 30 00:06:00.007514 waagent[1775]: 2025-10-30T00:06:00.006850Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 30 00:06:00.016021 waagent[1775]: 2025-10-30T00:06:00.009597Z INFO Daemon Daemon Downloading artifacts profile blob Oct 30 00:06:00.083915 waagent[1775]: 2025-10-30T00:06:00.082619Z INFO Daemon Downloaded certificate {'thumbprint': '57736B0BCA5D5F00E174D4FA9CB70E01463F1B0B', 'hasPrivateKey': True} Oct 30 00:06:00.083915 waagent[1775]: 2025-10-30T00:06:00.086332Z INFO Daemon Fetch goal state completed Oct 30 00:06:00.093311 waagent[1775]: 2025-10-30T00:06:00.093285Z INFO Daemon Daemon Starting provisioning Oct 30 00:06:00.094907 waagent[1775]: 2025-10-30T00:06:00.094828Z INFO Daemon Daemon Handle ovf-env.xml. Oct 30 00:06:00.096500 waagent[1775]: 2025-10-30T00:06:00.096200Z INFO Daemon Daemon Set hostname [ci-4459.1.0-n-666d628454] Oct 30 00:06:00.130041 waagent[1775]: 2025-10-30T00:06:00.130007Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-n-666d628454] Oct 30 00:06:00.131230 waagent[1775]: 2025-10-30T00:06:00.130239Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 30 00:06:00.131230 waagent[1775]: 2025-10-30T00:06:00.130485Z INFO Daemon Daemon Primary interface is [eth0] Oct 30 00:06:00.139233 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
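(The agent's "Examine /proc/net/route for primary interface" step above boils down to finding the interface that carries the default route. A minimal sketch of that lookup, assuming the standard /proc/net/route layout with little-endian hex addresses and destination 00000000 for the default route; this is not the agent's actual implementation.)

import socket
import struct

def default_route_interface(path="/proc/net/route"):
    """Return (interface, gateway) for the default route, or None."""
    with open(path) as f:
        next(f)                     # skip the header row (Iface Destination Gateway ...)
        for line in f:
            fields = line.split()
            iface, dest, gw = fields[0], fields[1], fields[2]
            if dest == "00000000":  # 0.0.0.0/0, i.e. the default route
                # addresses are stored as little-endian hex
                gateway = socket.inet_ntoa(struct.pack("<L", int(gw, 16)))
                return iface, gateway
    return None

print(default_route_interface())    # on this host: ('eth0', '10.200.8.1')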
Oct 30 00:06:00.140981 waagent[1775]: 2025-10-30T00:06:00.139817Z INFO Daemon Daemon Create user account if not exists Oct 30 00:06:00.139874 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:06:00.139896 systemd-networkd[1334]: eth0: DHCP lease lost Oct 30 00:06:00.140693 systemd[1835]: Queued start job for default target default.target. Oct 30 00:06:00.141929 waagent[1775]: 2025-10-30T00:06:00.141831Z INFO Daemon Daemon User core already exists, skip useradd Oct 30 00:06:00.141929 waagent[1775]: 2025-10-30T00:06:00.141947Z INFO Daemon Daemon Configure sudoer Oct 30 00:06:00.145245 systemd[1835]: Created slice app.slice - User Application Slice. Oct 30 00:06:00.145289 systemd[1835]: Reached target paths.target - Paths. Oct 30 00:06:00.145321 systemd[1835]: Reached target timers.target - Timers. Oct 30 00:06:00.145985 systemd[1835]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 00:06:00.148776 waagent[1775]: 2025-10-30T00:06:00.148730Z INFO Daemon Daemon Configure sshd Oct 30 00:06:00.153496 waagent[1775]: 2025-10-30T00:06:00.153460Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 30 00:06:00.157984 waagent[1775]: 2025-10-30T00:06:00.153588Z INFO Daemon Daemon Deploy ssh public key. Oct 30 00:06:00.156246 systemd[1835]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 00:06:00.156385 systemd[1835]: Reached target sockets.target - Sockets. Oct 30 00:06:00.156487 systemd[1835]: Reached target basic.target - Basic System. Oct 30 00:06:00.156540 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 00:06:00.157583 systemd[1835]: Reached target default.target - Main User Target. Oct 30 00:06:00.157606 systemd[1835]: Startup finished in 355ms. Oct 30 00:06:00.166329 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 30 00:06:00.166433 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 00:06:00.724052 login[1779]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 30 00:06:00.727488 systemd-logind[1672]: New session 2 of user core. Oct 30 00:06:00.734372 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 00:06:01.290873 waagent[1775]: 2025-10-30T00:06:01.290827Z INFO Daemon Daemon Provisioning complete Oct 30 00:06:01.299027 waagent[1775]: 2025-10-30T00:06:01.299001Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 30 00:06:01.300547 waagent[1775]: 2025-10-30T00:06:01.300520Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Oct 30 00:06:01.302768 waagent[1775]: 2025-10-30T00:06:01.302707Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Oct 30 00:06:01.391103 waagent[1878]: 2025-10-30T00:06:01.391050Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Oct 30 00:06:01.391303 waagent[1878]: 2025-10-30T00:06:01.391130Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Oct 30 00:06:01.391303 waagent[1878]: 2025-10-30T00:06:01.391167Z INFO ExtHandler ExtHandler Python: 3.11.13 Oct 30 00:06:01.391303 waagent[1878]: 2025-10-30T00:06:01.391205Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Oct 30 00:06:01.441261 waagent[1878]: 2025-10-30T00:06:01.441216Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Oct 30 00:06:01.441394 waagent[1878]: 2025-10-30T00:06:01.441367Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 00:06:01.441431 waagent[1878]: 2025-10-30T00:06:01.441419Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 00:06:01.452005 waagent[1878]: 2025-10-30T00:06:01.451956Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 30 00:06:01.467182 waagent[1878]: 2025-10-30T00:06:01.467154Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 30 00:06:01.467502 waagent[1878]: 2025-10-30T00:06:01.467477Z INFO ExtHandler Oct 30 00:06:01.467546 waagent[1878]: 2025-10-30T00:06:01.467523Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6e56574b-8cb9-4aba-8fa4-09a984f4949c eTag: 9177463258849795832 source: Fabric] Oct 30 00:06:01.467721 waagent[1878]: 2025-10-30T00:06:01.467698Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Oct 30 00:06:01.468008 waagent[1878]: 2025-10-30T00:06:01.467984Z INFO ExtHandler Oct 30 00:06:01.468038 waagent[1878]: 2025-10-30T00:06:01.468018Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 30 00:06:01.471551 waagent[1878]: 2025-10-30T00:06:01.471529Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 30 00:06:01.529503 waagent[1878]: 2025-10-30T00:06:01.529460Z INFO ExtHandler Downloaded certificate {'thumbprint': '57736B0BCA5D5F00E174D4FA9CB70E01463F1B0B', 'hasPrivateKey': True} Oct 30 00:06:01.529787 waagent[1878]: 2025-10-30T00:06:01.529762Z INFO ExtHandler Fetch goal state completed Oct 30 00:06:01.543358 waagent[1878]: 2025-10-30T00:06:01.543289Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Oct 30 00:06:01.546808 waagent[1878]: 2025-10-30T00:06:01.546766Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1878 Oct 30 00:06:01.546891 waagent[1878]: 2025-10-30T00:06:01.546869Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 30 00:06:01.547101 waagent[1878]: 2025-10-30T00:06:01.547081Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Oct 30 00:06:01.548029 waagent[1878]: 2025-10-30T00:06:01.547998Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 30 00:06:01.548315 waagent[1878]: 2025-10-30T00:06:01.548266Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Oct 30 00:06:01.548418 waagent[1878]: 2025-10-30T00:06:01.548400Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Oct 30 00:06:01.548753 waagent[1878]: 2025-10-30T00:06:01.548735Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 30 00:06:01.622147 waagent[1878]: 2025-10-30T00:06:01.622124Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 30 00:06:01.622255 waagent[1878]: 2025-10-30T00:06:01.622235Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 30 00:06:01.626961 waagent[1878]: 2025-10-30T00:06:01.626671Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 30 00:06:01.631056 systemd[1]: Reload requested from client PID 1893 ('systemctl') (unit waagent.service)... Oct 30 00:06:01.631068 systemd[1]: Reloading... Oct 30 00:06:01.690315 zram_generator::config[1932]: No configuration found. Oct 30 00:06:01.850594 systemd[1]: Reloading finished in 219 ms. Oct 30 00:06:01.863122 waagent[1878]: 2025-10-30T00:06:01.863071Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 30 00:06:01.863183 waagent[1878]: 2025-10-30T00:06:01.863159Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 30 00:06:01.908643 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#2 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Oct 30 00:06:02.288896 waagent[1878]: 2025-10-30T00:06:02.288858Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Oct 30 00:06:02.289081 waagent[1878]: 2025-10-30T00:06:02.289059Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Oct 30 00:06:02.289670 waagent[1878]: 2025-10-30T00:06:02.289568Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 30 00:06:02.289812 waagent[1878]: 2025-10-30T00:06:02.289773Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 00:06:02.289873 waagent[1878]: 2025-10-30T00:06:02.289842Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 00:06:02.290008 waagent[1878]: 2025-10-30T00:06:02.289989Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 30 00:06:02.290174 waagent[1878]: 2025-10-30T00:06:02.290154Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 30 00:06:02.290363 waagent[1878]: 2025-10-30T00:06:02.290327Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 30 00:06:02.290457 waagent[1878]: 2025-10-30T00:06:02.290437Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 30 00:06:02.290457 waagent[1878]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 30 00:06:02.290457 waagent[1878]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 30 00:06:02.290457 waagent[1878]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 30 00:06:02.290457 waagent[1878]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 30 00:06:02.290457 waagent[1878]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 30 00:06:02.290457 waagent[1878]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 30 00:06:02.290734 waagent[1878]: 2025-10-30T00:06:02.290698Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 30 00:06:02.290976 waagent[1878]: 2025-10-30T00:06:02.290916Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 30 00:06:02.291067 waagent[1878]: 2025-10-30T00:06:02.291046Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Oct 30 00:06:02.291124 waagent[1878]: 2025-10-30T00:06:02.291092Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 00:06:02.291182 waagent[1878]: 2025-10-30T00:06:02.291168Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 00:06:02.291357 waagent[1878]: 2025-10-30T00:06:02.291338Z INFO EnvHandler ExtHandler Configure routes Oct 30 00:06:02.291413 waagent[1878]: 2025-10-30T00:06:02.291385Z INFO EnvHandler ExtHandler Gateway:None Oct 30 00:06:02.291444 waagent[1878]: 2025-10-30T00:06:02.291432Z INFO EnvHandler ExtHandler Routes:None Oct 30 00:06:02.291620 waagent[1878]: 2025-10-30T00:06:02.291586Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 30 00:06:02.303852 waagent[1878]: 2025-10-30T00:06:02.303821Z INFO ExtHandler ExtHandler Oct 30 00:06:02.303907 waagent[1878]: 2025-10-30T00:06:02.303873Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e8628e42-3489-4a16-9a5b-bf22621cdb84 correlation 3078988a-f3fa-49c4-b808-6ac61d70943e created: 2025-10-30T00:04:45.899216Z] Oct 30 00:06:02.304103 waagent[1878]: 2025-10-30T00:06:02.304081Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 30 00:06:02.304471 waagent[1878]: 2025-10-30T00:06:02.304450Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Oct 30 00:06:02.335350 waagent[1878]: 2025-10-30T00:06:02.335266Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Oct 30 00:06:02.335350 waagent[1878]: Try `iptables -h' or 'iptables --help' for more information.) 
Oct 30 00:06:02.335811 waagent[1878]: 2025-10-30T00:06:02.335771Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2A20EA98-7DE4-49AF-8F25-50EB10D9C454;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Oct 30 00:06:02.340432 waagent[1878]: 2025-10-30T00:06:02.340394Z INFO MonitorHandler ExtHandler Network interfaces: Oct 30 00:06:02.340432 waagent[1878]: Executing ['ip', '-a', '-o', 'link']: Oct 30 00:06:02.340432 waagent[1878]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 30 00:06:02.340432 waagent[1878]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:f8:9f brd ff:ff:ff:ff:ff:ff\ alias Network Device Oct 30 00:06:02.340432 waagent[1878]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:f8:9f brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Oct 30 00:06:02.340432 waagent[1878]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 30 00:06:02.340432 waagent[1878]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 30 00:06:02.340432 waagent[1878]: 2: eth0 inet 10.200.8.44/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 30 00:06:02.340432 waagent[1878]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 30 00:06:02.340432 waagent[1878]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 30 00:06:02.340432 waagent[1878]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:f89f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 30 00:06:02.630771 waagent[1878]: 2025-10-30T00:06:02.630698Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Oct 30 00:06:02.630771 waagent[1878]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.630771 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.630771 waagent[1878]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.630771 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.630771 waagent[1878]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.630771 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.630771 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 30 00:06:02.630771 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 30 00:06:02.630771 waagent[1878]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 30 00:06:02.633044 waagent[1878]: 2025-10-30T00:06:02.633002Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 30 00:06:02.633044 waagent[1878]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.633044 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.633044 waagent[1878]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.633044 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.633044 waagent[1878]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 00:06:02.633044 waagent[1878]: pkts bytes target prot opt in out source destination Oct 30 00:06:02.633044 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 30 00:06:02.633044 waagent[1878]: 0 0 ACCEPT 
tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 30 00:06:02.633044 waagent[1878]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 30 00:06:08.974682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 00:06:08.975939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:09.479073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:09.485458 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:06:09.514448 kubelet[2030]: E1030 00:06:09.514421 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:06:09.517055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:06:09.517183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:06:09.517488 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.5M memory peak. Oct 30 00:06:19.724732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 00:06:19.726004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:20.231098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:20.243507 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:06:20.274816 kubelet[2045]: E1030 00:06:20.274770 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:06:20.276256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:06:20.276383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:06:20.276687 systemd[1]: kubelet.service: Consumed 111ms CPU time, 108.8M memory peak. Oct 30 00:06:20.998213 chronyd[1654]: Selected source PHC0 Oct 30 00:06:30.474934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 00:06:30.476219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:31.133068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:31.135952 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:06:31.166429 kubelet[2060]: E1030 00:06:31.166394 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:06:31.167756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:06:31.167876 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
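(The Azure fabric firewall rules that EnvHandler listed above allow DNS (tcp/53) and root-owned TCP traffic to the WireServer at 168.63.129.16 and drop other new connections to it. Expressed as plain iptables invocations they would look roughly like the sketch below; the agent inspects the security table earlier in this log, so the sketch targets that table, but this is an approximation rather than the agent's own code path.)

import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    # allow DNS queries to the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # allow TCP traffic to the WireServer from root-owned processes
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop other new/invalid connections to the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)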
Oct 30 00:06:31.168159 systemd[1]: kubelet.service: Consumed 111ms CPU time, 110.4M memory peak. Oct 30 00:06:32.147265 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 00:06:32.148248 systemd[1]: Started sshd@0-10.200.8.44:22-10.200.16.10:60948.service - OpenSSH per-connection server daemon (10.200.16.10:60948). Oct 30 00:06:32.911670 sshd[2068]: Accepted publickey for core from 10.200.16.10 port 60948 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:32.912512 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:32.916070 systemd-logind[1672]: New session 3 of user core. Oct 30 00:06:32.922412 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 00:06:33.470224 systemd[1]: Started sshd@1-10.200.8.44:22-10.200.16.10:60952.service - OpenSSH per-connection server daemon (10.200.16.10:60952). Oct 30 00:06:34.094109 sshd[2074]: Accepted publickey for core from 10.200.16.10 port 60952 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:34.094939 sshd-session[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:34.098445 systemd-logind[1672]: New session 4 of user core. Oct 30 00:06:34.104389 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 00:06:34.539614 sshd[2077]: Connection closed by 10.200.16.10 port 60952 Oct 30 00:06:34.539942 sshd-session[2074]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:34.542231 systemd[1]: sshd@1-10.200.8.44:22-10.200.16.10:60952.service: Deactivated successfully. Oct 30 00:06:34.543418 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 00:06:34.543974 systemd-logind[1672]: Session 4 logged out. Waiting for processes to exit. Oct 30 00:06:34.544843 systemd-logind[1672]: Removed session 4. Oct 30 00:06:34.654133 systemd[1]: Started sshd@2-10.200.8.44:22-10.200.16.10:60960.service - OpenSSH per-connection server daemon (10.200.16.10:60960). Oct 30 00:06:35.283815 sshd[2083]: Accepted publickey for core from 10.200.16.10 port 60960 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:35.284561 sshd-session[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:35.288052 systemd-logind[1672]: New session 5 of user core. Oct 30 00:06:35.294374 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 00:06:35.733927 sshd[2086]: Connection closed by 10.200.16.10 port 60960 Oct 30 00:06:35.734244 sshd-session[2083]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:35.736483 systemd[1]: sshd@2-10.200.8.44:22-10.200.16.10:60960.service: Deactivated successfully. Oct 30 00:06:35.737594 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 00:06:35.738210 systemd-logind[1672]: Session 5 logged out. Waiting for processes to exit. Oct 30 00:06:35.739072 systemd-logind[1672]: Removed session 5. Oct 30 00:06:35.846053 systemd[1]: Started sshd@3-10.200.8.44:22-10.200.16.10:60970.service - OpenSSH per-connection server daemon (10.200.16.10:60970). Oct 30 00:06:36.472816 sshd[2092]: Accepted publickey for core from 10.200.16.10 port 60970 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:36.473697 sshd-session[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:36.477328 systemd-logind[1672]: New session 6 of user core. 
Oct 30 00:06:36.483396 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 00:06:36.920568 sshd[2095]: Connection closed by 10.200.16.10 port 60970 Oct 30 00:06:36.920887 sshd-session[2092]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:36.923153 systemd[1]: sshd@3-10.200.8.44:22-10.200.16.10:60970.service: Deactivated successfully. Oct 30 00:06:36.924338 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 00:06:36.925087 systemd-logind[1672]: Session 6 logged out. Waiting for processes to exit. Oct 30 00:06:36.925747 systemd-logind[1672]: Removed session 6. Oct 30 00:06:37.041087 systemd[1]: Started sshd@4-10.200.8.44:22-10.200.16.10:60982.service - OpenSSH per-connection server daemon (10.200.16.10:60982). Oct 30 00:06:37.668650 sshd[2101]: Accepted publickey for core from 10.200.16.10 port 60982 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:37.669448 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:37.672982 systemd-logind[1672]: New session 7 of user core. Oct 30 00:06:37.678411 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 30 00:06:38.164820 sudo[2105]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 00:06:38.165018 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:06:38.190851 sudo[2105]: pam_unix(sudo:session): session closed for user root Oct 30 00:06:38.292581 sshd[2104]: Connection closed by 10.200.16.10 port 60982 Oct 30 00:06:38.293023 sshd-session[2101]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:38.295753 systemd[1]: sshd@4-10.200.8.44:22-10.200.16.10:60982.service: Deactivated successfully. Oct 30 00:06:38.296927 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:06:38.297551 systemd-logind[1672]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:06:38.298456 systemd-logind[1672]: Removed session 7. Oct 30 00:06:38.414215 systemd[1]: Started sshd@5-10.200.8.44:22-10.200.16.10:60984.service - OpenSSH per-connection server daemon (10.200.16.10:60984). Oct 30 00:06:38.565039 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Oct 30 00:06:39.040176 sshd[2111]: Accepted publickey for core from 10.200.16.10 port 60984 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:39.041034 sshd-session[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:39.044641 systemd-logind[1672]: New session 8 of user core. Oct 30 00:06:39.052407 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 00:06:39.381632 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 00:06:39.381820 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:06:39.390446 sudo[2116]: pam_unix(sudo:session): session closed for user root Oct 30 00:06:39.393704 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 00:06:39.393888 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:06:39.400389 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:06:39.427609 augenrules[2138]: No rules Oct 30 00:06:39.428432 systemd[1]: audit-rules.service: Deactivated successfully. 
Oct 30 00:06:39.428614 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:06:39.429434 sudo[2115]: pam_unix(sudo:session): session closed for user root Oct 30 00:06:39.549150 sshd[2114]: Connection closed by 10.200.16.10 port 60984 Oct 30 00:06:39.549499 sshd-session[2111]: pam_unix(sshd:session): session closed for user core Oct 30 00:06:39.551657 systemd[1]: sshd@5-10.200.8.44:22-10.200.16.10:60984.service: Deactivated successfully. Oct 30 00:06:39.552902 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:06:39.554369 systemd-logind[1672]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:06:39.555031 systemd-logind[1672]: Removed session 8. Oct 30 00:06:39.662253 systemd[1]: Started sshd@6-10.200.8.44:22-10.200.16.10:60998.service - OpenSSH per-connection server daemon (10.200.16.10:60998). Oct 30 00:06:40.291181 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 60998 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:06:40.291944 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:06:40.295518 systemd-logind[1672]: New session 9 of user core. Oct 30 00:06:40.301387 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 00:06:40.631900 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 00:06:40.632090 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:06:41.224621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 30 00:06:41.226664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:41.975622 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 00:06:41.983509 (dockerd)[2172]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 00:06:42.049425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:42.061520 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:06:42.093578 kubelet[2178]: E1030 00:06:42.093530 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:06:42.094881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:06:42.094989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:06:42.095259 systemd[1]: kubelet.service: Consumed 115ms CPU time, 110.6M memory peak. Oct 30 00:06:42.793369 update_engine[1673]: I20251030 00:06:42.793324 1673 update_attempter.cc:509] Updating boot flags... 
Oct 30 00:06:43.280144 dockerd[2172]: time="2025-10-30T00:06:43.279747435Z" level=info msg="Starting up" Oct 30 00:06:43.281805 dockerd[2172]: time="2025-10-30T00:06:43.281784229Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 00:06:43.301933 dockerd[2172]: time="2025-10-30T00:06:43.301903102Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 00:06:43.401937 dockerd[2172]: time="2025-10-30T00:06:43.401827683Z" level=info msg="Loading containers: start." Oct 30 00:06:43.464292 kernel: Initializing XFRM netlink socket Oct 30 00:06:43.856620 systemd-networkd[1334]: docker0: Link UP Oct 30 00:06:43.894113 dockerd[2172]: time="2025-10-30T00:06:43.894038181Z" level=info msg="Loading containers: done." Oct 30 00:06:43.954783 dockerd[2172]: time="2025-10-30T00:06:43.954759750Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 00:06:43.954882 dockerd[2172]: time="2025-10-30T00:06:43.954814842Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 00:06:43.954882 dockerd[2172]: time="2025-10-30T00:06:43.954876144Z" level=info msg="Initializing buildkit" Oct 30 00:06:44.002660 dockerd[2172]: time="2025-10-30T00:06:44.002634601Z" level=info msg="Completed buildkit initialization" Oct 30 00:06:44.005095 dockerd[2172]: time="2025-10-30T00:06:44.005056969Z" level=info msg="Daemon has completed initialization" Oct 30 00:06:44.005305 dockerd[2172]: time="2025-10-30T00:06:44.005165011Z" level=info msg="API listen on /run/docker.sock" Oct 30 00:06:44.005234 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 00:06:45.180143 containerd[1697]: time="2025-10-30T00:06:45.180109484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 30 00:06:45.990545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858166860.mount: Deactivated successfully. 
Oct 30 00:06:47.248465 containerd[1697]: time="2025-10-30T00:06:47.248425038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:47.252505 containerd[1697]: time="2025-10-30T00:06:47.252384362Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Oct 30 00:06:47.260315 containerd[1697]: time="2025-10-30T00:06:47.260295103Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:47.271199 containerd[1697]: time="2025-10-30T00:06:47.271174583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:47.271877 containerd[1697]: time="2025-10-30T00:06:47.271736339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.091596452s" Oct 30 00:06:47.271877 containerd[1697]: time="2025-10-30T00:06:47.271765997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 30 00:06:47.272303 containerd[1697]: time="2025-10-30T00:06:47.272264866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 30 00:06:48.584503 containerd[1697]: time="2025-10-30T00:06:48.584473023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:48.587288 containerd[1697]: time="2025-10-30T00:06:48.587256036Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Oct 30 00:06:48.590963 containerd[1697]: time="2025-10-30T00:06:48.590929528Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:48.594365 containerd[1697]: time="2025-10-30T00:06:48.594328937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:48.595045 containerd[1697]: time="2025-10-30T00:06:48.594926865Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.322621923s" Oct 30 00:06:48.595045 containerd[1697]: time="2025-10-30T00:06:48.594952398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 30 00:06:48.595416 
containerd[1697]: time="2025-10-30T00:06:48.595385474Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 30 00:06:49.887335 containerd[1697]: time="2025-10-30T00:06:49.887300661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.890446 containerd[1697]: time="2025-10-30T00:06:49.890332442Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Oct 30 00:06:49.893414 containerd[1697]: time="2025-10-30T00:06:49.893392887Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.897570 containerd[1697]: time="2025-10-30T00:06:49.897544035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.898072 containerd[1697]: time="2025-10-30T00:06:49.898051771Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.302540116s" Oct 30 00:06:49.898106 containerd[1697]: time="2025-10-30T00:06:49.898080150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 30 00:06:49.898599 containerd[1697]: time="2025-10-30T00:06:49.898576079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 30 00:06:50.822764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210120780.mount: Deactivated successfully. 
Oct 30 00:06:51.187073 containerd[1697]: time="2025-10-30T00:06:51.187005374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:51.190022 containerd[1697]: time="2025-10-30T00:06:51.189996466Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Oct 30 00:06:51.193348 containerd[1697]: time="2025-10-30T00:06:51.193314298Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:51.198121 containerd[1697]: time="2025-10-30T00:06:51.197798423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:51.198121 containerd[1697]: time="2025-10-30T00:06:51.198031762Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.299430481s" Oct 30 00:06:51.198121 containerd[1697]: time="2025-10-30T00:06:51.198052837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 30 00:06:51.198560 containerd[1697]: time="2025-10-30T00:06:51.198545898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 30 00:06:51.822880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068070539.mount: Deactivated successfully. Oct 30 00:06:52.224481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 30 00:06:52.226833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:52.924691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:52.928560 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:06:52.958401 kubelet[2534]: E1030 00:06:52.958373 2534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:06:52.959729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:06:52.959841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:06:52.960139 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.4M memory peak. 
Oct 30 00:06:53.455792 containerd[1697]: time="2025-10-30T00:06:53.455756325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:53.459511 containerd[1697]: time="2025-10-30T00:06:53.459481946Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Oct 30 00:06:53.463019 containerd[1697]: time="2025-10-30T00:06:53.462985592Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:53.468494 containerd[1697]: time="2025-10-30T00:06:53.468458843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:53.469145 containerd[1697]: time="2025-10-30T00:06:53.469029156Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.270396618s" Oct 30 00:06:53.469145 containerd[1697]: time="2025-10-30T00:06:53.469059369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 30 00:06:53.469613 containerd[1697]: time="2025-10-30T00:06:53.469594070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 00:06:54.084833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547585314.mount: Deactivated successfully. 
Oct 30 00:06:54.112547 containerd[1697]: time="2025-10-30T00:06:54.112516703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:06:54.115472 containerd[1697]: time="2025-10-30T00:06:54.115441942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 30 00:06:54.120352 containerd[1697]: time="2025-10-30T00:06:54.120319743Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:06:54.125574 containerd[1697]: time="2025-10-30T00:06:54.125539783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:06:54.126175 containerd[1697]: time="2025-10-30T00:06:54.125869715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 656.251856ms" Oct 30 00:06:54.126175 containerd[1697]: time="2025-10-30T00:06:54.125894418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:06:54.126439 containerd[1697]: time="2025-10-30T00:06:54.126422422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 30 00:06:54.744435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525846973.mount: Deactivated successfully. 
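Each of the image pulls above (kube-apiserver through pause, with etcd still in flight below) ends in a containerd record of the form Pulled image ... size "<bytes>" in <duration>. An illustrative Python sketch that turns one such message into a rough throughput figure — the sample string is the pause pull copied from this log; the regex and the MB/s arithmetic are assumptions about that message layout, not a containerd interface:

# Illustrative only: parse a containerd "Pulled image ... in <duration>" summary.
import re

SAMPLE = (
    'Pulled image "registry.k8s.io/pause:3.10" with image id '
    '"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136", '
    'repo tag "registry.k8s.io/pause:3.10", repo digest '
    '"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a", '
    'size "320368" in 656.251856ms'
)

PULLED = re.compile(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)')

def summarize(msg):
    """Return (image, size_bytes, seconds) parsed from a pull summary, or None."""
    m = PULLED.search(msg)
    if not m:
        return None
    image, size, value, unit = m.groups()
    seconds = float(value) / 1000 if unit == "ms" else float(value)
    return image, int(size), seconds

image, size, seconds = summarize(SAMPLE)
print(f"{image}: {size / 1e6:.2f} MB in {seconds:.3f}s (~{size / 1e6 / seconds:.1f} MB/s)")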
Oct 30 00:06:56.669268 containerd[1697]: time="2025-10-30T00:06:56.668839195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:56.671390 containerd[1697]: time="2025-10-30T00:06:56.671369683Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Oct 30 00:06:56.675689 containerd[1697]: time="2025-10-30T00:06:56.675669535Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:56.679330 containerd[1697]: time="2025-10-30T00:06:56.679307983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:56.679950 containerd[1697]: time="2025-10-30T00:06:56.679930053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.553486446s" Oct 30 00:06:56.679992 containerd[1697]: time="2025-10-30T00:06:56.679958238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 30 00:06:59.871006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:59.871136 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.4M memory peak. Oct 30 00:06:59.873129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:59.896052 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-9.scope)... Oct 30 00:06:59.896063 systemd[1]: Reloading... Oct 30 00:06:59.953272 zram_generator::config[2694]: No configuration found. Oct 30 00:07:00.128628 systemd[1]: Reloading finished in 232 ms. Oct 30 00:07:00.157684 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:07:00.157743 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:07:00.157941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:07:00.159511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:07:00.629116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:07:00.632151 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:07:00.666818 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:07:00.666818 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:07:00.666818 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:07:00.667039 kubelet[2765]: I1030 00:07:00.666861 2765 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:07:01.105899 kubelet[2765]: I1030 00:07:01.105876 2765 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:07:01.105899 kubelet[2765]: I1030 00:07:01.105893 2765 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:07:01.106057 kubelet[2765]: I1030 00:07:01.106046 2765 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:07:01.132129 kubelet[2765]: E1030 00:07:01.132100 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:07:01.132568 kubelet[2765]: I1030 00:07:01.132381 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:07:01.137719 kubelet[2765]: I1030 00:07:01.137706 2765 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:07:01.140903 kubelet[2765]: I1030 00:07:01.140888 2765 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 00:07:01.141071 kubelet[2765]: I1030 00:07:01.141053 2765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:07:01.141190 kubelet[2765]: I1030 00:07:01.141070 2765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-666d628454","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:07:01.141300 kubelet[2765]: I1030 00:07:01.141195 2765 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:07:01.141300 kubelet[2765]: I1030 
00:07:01.141202 2765 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:07:01.141895 kubelet[2765]: I1030 00:07:01.141881 2765 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:07:01.143790 kubelet[2765]: I1030 00:07:01.143775 2765 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:07:01.143790 kubelet[2765]: I1030 00:07:01.143789 2765 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:07:01.143878 kubelet[2765]: I1030 00:07:01.143867 2765 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:07:01.146121 kubelet[2765]: I1030 00:07:01.146107 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:07:01.152910 kubelet[2765]: I1030 00:07:01.152645 2765 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:07:01.153057 kubelet[2765]: I1030 00:07:01.153038 2765 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:07:01.154051 kubelet[2765]: W1030 00:07:01.154034 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:07:01.155916 kubelet[2765]: I1030 00:07:01.155901 2765 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:07:01.155977 kubelet[2765]: I1030 00:07:01.155942 2765 server.go:1289] "Started kubelet" Oct 30 00:07:01.156307 kubelet[2765]: E1030 00:07:01.156094 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-666d628454&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:07:01.161225 kubelet[2765]: I1030 00:07:01.161207 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:07:01.166205 kubelet[2765]: E1030 00:07:01.165717 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:07:01.167875 kubelet[2765]: I1030 00:07:01.167846 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:07:01.169300 kubelet[2765]: I1030 00:07:01.168511 2765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:07:01.169300 kubelet[2765]: I1030 00:07:01.168788 2765 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:07:01.169300 kubelet[2765]: E1030 00:07:01.168919 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:01.169402 kubelet[2765]: I1030 00:07:01.169372 2765 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:07:01.169433 kubelet[2765]: I1030 00:07:01.169423 2765 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:07:01.169482 kubelet[2765]: I1030 00:07:01.169474 2765 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:07:01.169694 
kubelet[2765]: I1030 00:07:01.169655 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:07:01.169834 kubelet[2765]: I1030 00:07:01.169824 2765 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:07:01.170542 kubelet[2765]: E1030 00:07:01.170516 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:07:01.170602 kubelet[2765]: E1030 00:07:01.170582 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-666d628454?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="200ms" Oct 30 00:07:01.171806 kubelet[2765]: E1030 00:07:01.170627 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-666d628454.18731c243a6243af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-666d628454,UID:ci-4459.1.0-n-666d628454,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-666d628454,},FirstTimestamp:2025-10-30 00:07:01.155914671 +0000 UTC m=+0.520797468,LastTimestamp:2025-10-30 00:07:01.155914671 +0000 UTC m=+0.520797468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-666d628454,}" Oct 30 00:07:01.173313 kubelet[2765]: I1030 00:07:01.172992 2765 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:07:01.173313 kubelet[2765]: I1030 00:07:01.173059 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:07:01.174526 kubelet[2765]: E1030 00:07:01.174509 2765 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:07:01.174603 kubelet[2765]: I1030 00:07:01.174595 2765 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:07:01.197373 kubelet[2765]: I1030 00:07:01.197363 2765 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:07:01.197581 kubelet[2765]: I1030 00:07:01.197493 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:07:01.197581 kubelet[2765]: I1030 00:07:01.197506 2765 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:07:01.209521 kubelet[2765]: I1030 00:07:01.209394 2765 policy_none.go:49] "None policy: Start" Oct 30 00:07:01.209521 kubelet[2765]: I1030 00:07:01.209408 2765 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:07:01.209521 kubelet[2765]: I1030 00:07:01.209415 2765 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:07:01.219455 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:07:01.227821 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 00:07:01.234851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:07:01.236031 kubelet[2765]: E1030 00:07:01.235991 2765 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:07:01.236207 kubelet[2765]: I1030 00:07:01.236191 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:07:01.236234 kubelet[2765]: I1030 00:07:01.236199 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:07:01.237217 kubelet[2765]: I1030 00:07:01.237117 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:07:01.238102 kubelet[2765]: E1030 00:07:01.238087 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:07:01.238189 kubelet[2765]: E1030 00:07:01.238183 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:01.263755 kubelet[2765]: I1030 00:07:01.263685 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:07:01.264787 kubelet[2765]: I1030 00:07:01.264768 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 30 00:07:01.264787 kubelet[2765]: I1030 00:07:01.264786 2765 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:07:01.264858 kubelet[2765]: I1030 00:07:01.264799 2765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
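Every control-plane call above is refused at https://10.200.8.44:6443 because the API server this kubelet talks to is the static pod it is about to create; meanwhile the lease controller's "will retry" interval doubles from 200ms to 400ms to 800ms across the surrounding entries. A doubling-backoff sketch in that spirit — the 200ms start and factor of two follow the logged intervals, the 7-second cap is an assumption, and none of this is kubelet's actual retry code:

# Illustrative only: a capped doubling backoff matching the 200ms/400ms/800ms
# retry intervals logged by the lease controller. The cap value is assumed.
def backoff_intervals(initial=0.2, factor=2.0, cap=7.0):
    """Yield retry delays in seconds: initial, initial*factor, ..., bounded by cap."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

for attempt, delay in zip(range(1, 7), backoff_intervals()):
    print(f"attempt {attempt}: wait {delay:.1f}s before retrying the lease update")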
Oct 30 00:07:01.264858 kubelet[2765]: I1030 00:07:01.264805 2765 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:07:01.264858 kubelet[2765]: E1030 00:07:01.264833 2765 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 30 00:07:01.266174 kubelet[2765]: E1030 00:07:01.266139 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:07:01.337632 kubelet[2765]: I1030 00:07:01.337620 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.337825 kubelet[2765]: E1030 00:07:01.337806 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.370727 kubelet[2765]: I1030 00:07:01.370288 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.370727 kubelet[2765]: I1030 00:07:01.370330 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.370727 kubelet[2765]: I1030 00:07:01.370351 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.371664 kubelet[2765]: E1030 00:07:01.371623 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-666d628454?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="400ms" Oct 30 00:07:01.379493 systemd[1]: Created slice kubepods-burstable-pod84561da6fdda0a91e6b45de04c97c6df.slice - libcontainer container kubepods-burstable-pod84561da6fdda0a91e6b45de04c97c6df.slice. Oct 30 00:07:01.397825 kubelet[2765]: E1030 00:07:01.397811 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.400610 systemd[1]: Created slice kubepods-burstable-podc1125967a3a3cf71640de75a1eaafb38.slice - libcontainer container kubepods-burstable-podc1125967a3a3cf71640de75a1eaafb38.slice. 
Oct 30 00:07:01.405955 kubelet[2765]: E1030 00:07:01.402370 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.416973 systemd[1]: Created slice kubepods-burstable-pod651c844fef96a72b1b773ab8fdbdd85a.slice - libcontainer container kubepods-burstable-pod651c844fef96a72b1b773ab8fdbdd85a.slice. Oct 30 00:07:01.418700 kubelet[2765]: E1030 00:07:01.418681 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471028 kubelet[2765]: I1030 00:07:01.470995 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471096 kubelet[2765]: I1030 00:07:01.471033 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471096 kubelet[2765]: I1030 00:07:01.471072 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471096 kubelet[2765]: I1030 00:07:01.471088 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471161 kubelet[2765]: I1030 00:07:01.471104 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.471161 kubelet[2765]: I1030 00:07:01.471118 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/651c844fef96a72b1b773ab8fdbdd85a-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-666d628454\" (UID: \"651c844fef96a72b1b773ab8fdbdd85a\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:01.539001 kubelet[2765]: I1030 00:07:01.538985 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.539193 kubelet[2765]: E1030 00:07:01.539174 2765 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.698974 containerd[1697]: time="2025-10-30T00:07:01.698908602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-666d628454,Uid:84561da6fdda0a91e6b45de04c97c6df,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:01.706891 containerd[1697]: time="2025-10-30T00:07:01.706869522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-666d628454,Uid:c1125967a3a3cf71640de75a1eaafb38,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:01.719513 containerd[1697]: time="2025-10-30T00:07:01.719491647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-666d628454,Uid:651c844fef96a72b1b773ab8fdbdd85a,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:01.771939 kubelet[2765]: E1030 00:07:01.771904 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-666d628454?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="800ms" Oct 30 00:07:01.940323 kubelet[2765]: I1030 00:07:01.940305 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.940531 kubelet[2765]: E1030 00:07:01.940481 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:01.993667 containerd[1697]: time="2025-10-30T00:07:01.992956903Z" level=info msg="connecting to shim 7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078" address="unix:///run/containerd/s/69d84b2ad86d972eafbbbcd1bc6ad23c201734e28a99bdce5ef393ea863b2cdc" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:01.995056 kubelet[2765]: E1030 00:07:01.995028 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-666d628454&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:07:01.997258 containerd[1697]: time="2025-10-30T00:07:01.997226479Z" level=info msg="connecting to shim 41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498" address="unix:///run/containerd/s/932340452bc78b667e31af0f732ca13dd832377677a4a1687dcf04d25aee9484" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:01.998268 containerd[1697]: time="2025-10-30T00:07:01.998246350Z" level=info msg="connecting to shim d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9" address="unix:///run/containerd/s/58708a5903dcae84cc87ad7b11cc44a8a4014834513d42c4c4ddf7c58f22c3fd" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:02.024410 systemd[1]: Started cri-containerd-7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078.scope - libcontainer container 7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078. Oct 30 00:07:02.026744 systemd[1]: Started cri-containerd-41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498.scope - libcontainer container 41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498. 
Oct 30 00:07:02.033886 systemd[1]: Started cri-containerd-d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9.scope - libcontainer container d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9. Oct 30 00:07:02.097578 containerd[1697]: time="2025-10-30T00:07:02.097529845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-666d628454,Uid:84561da6fdda0a91e6b45de04c97c6df,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078\"" Oct 30 00:07:02.102134 containerd[1697]: time="2025-10-30T00:07:02.102112547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-666d628454,Uid:c1125967a3a3cf71640de75a1eaafb38,Namespace:kube-system,Attempt:0,} returns sandbox id \"d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9\"" Oct 30 00:07:02.106871 containerd[1697]: time="2025-10-30T00:07:02.106845237Z" level=info msg="CreateContainer within sandbox \"7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:07:02.109202 containerd[1697]: time="2025-10-30T00:07:02.109179242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-666d628454,Uid:651c844fef96a72b1b773ab8fdbdd85a,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498\"" Oct 30 00:07:02.117773 containerd[1697]: time="2025-10-30T00:07:02.117390199Z" level=info msg="CreateContainer within sandbox \"d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:07:02.139822 containerd[1697]: time="2025-10-30T00:07:02.139797522Z" level=info msg="CreateContainer within sandbox \"41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:07:02.152282 containerd[1697]: time="2025-10-30T00:07:02.152257802Z" level=info msg="Container b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:02.201643 containerd[1697]: time="2025-10-30T00:07:02.201607904Z" level=info msg="Container 194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:02.213517 containerd[1697]: time="2025-10-30T00:07:02.213497565Z" level=info msg="Container c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:02.214200 containerd[1697]: time="2025-10-30T00:07:02.214168236Z" level=info msg="CreateContainer within sandbox \"7d5cad5957ba84509050c9a67a67241105f660205964587662b293ade5d2d078\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef\"" Oct 30 00:07:02.214644 containerd[1697]: time="2025-10-30T00:07:02.214580816Z" level=info msg="StartContainer for \"b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef\"" Oct 30 00:07:02.218458 containerd[1697]: time="2025-10-30T00:07:02.218437361Z" level=info msg="connecting to shim b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef" address="unix:///run/containerd/s/69d84b2ad86d972eafbbbcd1bc6ad23c201734e28a99bdce5ef393ea863b2cdc" protocol=ttrpc version=3 Oct 30 00:07:02.238179 containerd[1697]: 
time="2025-10-30T00:07:02.238115203Z" level=info msg="CreateContainer within sandbox \"d88a83463ab08226cdfcc487aa495d8e526ee84e282f8943055de7dc79df55e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74\"" Oct 30 00:07:02.238478 containerd[1697]: time="2025-10-30T00:07:02.238457939Z" level=info msg="StartContainer for \"194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74\"" Oct 30 00:07:02.239372 containerd[1697]: time="2025-10-30T00:07:02.239298141Z" level=info msg="connecting to shim 194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74" address="unix:///run/containerd/s/58708a5903dcae84cc87ad7b11cc44a8a4014834513d42c4c4ddf7c58f22c3fd" protocol=ttrpc version=3 Oct 30 00:07:02.239487 systemd[1]: Started cri-containerd-b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef.scope - libcontainer container b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef. Oct 30 00:07:02.247780 kubelet[2765]: E1030 00:07:02.247655 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:07:02.257405 systemd[1]: Started cri-containerd-194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74.scope - libcontainer container 194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74. Oct 30 00:07:02.263133 containerd[1697]: time="2025-10-30T00:07:02.263099449Z" level=info msg="CreateContainer within sandbox \"41b954b2d1c04c27db89672b0c241e07a91ba421d6fbce70eddc2e2ad968a498\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19\"" Oct 30 00:07:02.263753 containerd[1697]: time="2025-10-30T00:07:02.263655683Z" level=info msg="StartContainer for \"c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19\"" Oct 30 00:07:02.264860 containerd[1697]: time="2025-10-30T00:07:02.264814784Z" level=info msg="connecting to shim c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19" address="unix:///run/containerd/s/932340452bc78b667e31af0f732ca13dd832377677a4a1687dcf04d25aee9484" protocol=ttrpc version=3 Oct 30 00:07:02.285494 systemd[1]: Started cri-containerd-c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19.scope - libcontainer container c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19. 
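The API-server, controller-manager, and scheduler containers created above report StartContainer ... returns successfully in the entries that follow, after which the connection-refused dials to 10.200.8.44:6443 stop and node registration goes through. A trivial, illustration-only way to watch for that transition from the host — the address and port are taken from this log, the one-second timeout is an assumption:

# Illustrative only: a single TCP probe of the logged API-server endpoint.
# Address and port come from the log; the timeout is chosen arbitrarily.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("kube-apiserver reachable:", port_open("10.200.8.44", 6443))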
Oct 30 00:07:02.287638 kubelet[2765]: E1030 00:07:02.287590 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:07:02.313649 containerd[1697]: time="2025-10-30T00:07:02.313630433Z" level=info msg="StartContainer for \"b5bd4601f510c34c8b05d66d57d52362cfbae3825e0c5748532fb0dd05a5fcef\" returns successfully" Oct 30 00:07:02.319923 kubelet[2765]: E1030 00:07:02.319899 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:07:02.338928 containerd[1697]: time="2025-10-30T00:07:02.338898839Z" level=info msg="StartContainer for \"c4d37d1e70445f1a7c2f60c1ba616278b0a1e07d625dd1bc8a7006f5a2f5db19\" returns successfully" Oct 30 00:07:02.388322 containerd[1697]: time="2025-10-30T00:07:02.388302157Z" level=info msg="StartContainer for \"194f613c71236d1fe9ab9f178d01a030a7708f1099a7dfbb25d9c080a1884c74\" returns successfully" Oct 30 00:07:02.742932 kubelet[2765]: I1030 00:07:02.742905 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:03.284826 kubelet[2765]: E1030 00:07:03.284801 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:03.289293 kubelet[2765]: E1030 00:07:03.288338 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:03.290856 kubelet[2765]: E1030 00:07:03.290838 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.144208 kubelet[2765]: E1030 00:07:04.144159 2765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.218848 kubelet[2765]: I1030 00:07:04.218817 2765 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.218848 kubelet[2765]: E1030 00:07:04.218844 2765 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-666d628454\": node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.238892 kubelet[2765]: E1030 00:07:04.238863 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.291247 kubelet[2765]: E1030 00:07:04.291228 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.291533 kubelet[2765]: E1030 00:07:04.291414 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.291591 kubelet[2765]: E1030 00:07:04.291574 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-666d628454\" not found" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:04.339194 kubelet[2765]: E1030 00:07:04.339174 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.439732 kubelet[2765]: E1030 00:07:04.439646 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.540136 kubelet[2765]: E1030 00:07:04.540103 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.640742 kubelet[2765]: E1030 00:07:04.640721 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.741491 kubelet[2765]: E1030 00:07:04.741459 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.841910 kubelet[2765]: E1030 00:07:04.841892 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:04.970130 kubelet[2765]: I1030 00:07:04.970096 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:04.974613 kubelet[2765]: E1030 00:07:04.974392 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-666d628454\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:04.974613 kubelet[2765]: I1030 00:07:04.974409 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:04.975664 kubelet[2765]: E1030 00:07:04.975644 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-666d628454\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:04.975664 kubelet[2765]: I1030 00:07:04.975662 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:04.976969 kubelet[2765]: E1030 00:07:04.976954 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-666d628454\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:05.161450 kubelet[2765]: I1030 00:07:05.161394 2765 apiserver.go:52] "Watching apiserver" Oct 30 00:07:05.170498 kubelet[2765]: I1030 00:07:05.170478 2765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:07:05.291961 kubelet[2765]: I1030 00:07:05.291923 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:05.292216 kubelet[2765]: I1030 00:07:05.292186 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:05.299494 
kubelet[2765]: I1030 00:07:05.299297 2765 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:05.299571 kubelet[2765]: I1030 00:07:05.299554 2765 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:06.117626 systemd[1]: Reload requested from client PID 3046 ('systemctl') (unit session-9.scope)... Oct 30 00:07:06.117639 systemd[1]: Reloading... Oct 30 00:07:06.192319 zram_generator::config[3096]: No configuration found. Oct 30 00:07:06.357605 systemd[1]: Reloading finished in 239 ms. Oct 30 00:07:06.388576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:07:06.411863 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:07:06.412066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:07:06.412108 systemd[1]: kubelet.service: Consumed 751ms CPU time, 128.9M memory peak. Oct 30 00:07:06.413316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:07:06.841052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:07:06.844339 (kubelet)[3160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:07:06.879915 kubelet[3160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:07:06.880069 kubelet[3160]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:07:06.880099 kubelet[3160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:07:06.880164 kubelet[3160]: I1030 00:07:06.880151 3160 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:07:06.884485 kubelet[3160]: I1030 00:07:06.884466 3160 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:07:06.884548 kubelet[3160]: I1030 00:07:06.884544 3160 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:07:06.884765 kubelet[3160]: I1030 00:07:06.884759 3160 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:07:06.885999 kubelet[3160]: I1030 00:07:06.885986 3160 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 00:07:06.888592 kubelet[3160]: I1030 00:07:06.888569 3160 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:07:06.899920 kubelet[3160]: I1030 00:07:06.899897 3160 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:07:06.903409 kubelet[3160]: I1030 00:07:06.903378 3160 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:07:06.903658 kubelet[3160]: I1030 00:07:06.903633 3160 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:07:06.903863 kubelet[3160]: I1030 00:07:06.903717 3160 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-666d628454","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:07:06.903986 kubelet[3160]: I1030 00:07:06.903978 3160 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:07:06.904025 kubelet[3160]: I1030 00:07:06.904021 3160 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:07:06.904087 kubelet[3160]: I1030 00:07:06.904083 3160 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:07:06.904241 kubelet[3160]: I1030 00:07:06.904234 3160 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:07:06.904300 kubelet[3160]: I1030 00:07:06.904295 3160 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:07:06.904347 kubelet[3160]: I1030 00:07:06.904343 3160 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:07:06.904380 kubelet[3160]: I1030 00:07:06.904376 3160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:07:06.908976 kubelet[3160]: I1030 00:07:06.908962 3160 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:07:06.909595 kubelet[3160]: I1030 00:07:06.909581 3160 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:07:06.912204 kubelet[3160]: I1030 00:07:06.912193 3160 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:07:06.912326 kubelet[3160]: I1030 00:07:06.912320 3160 server.go:1289] "Started kubelet" Oct 30 00:07:06.914208 kubelet[3160]: I1030 00:07:06.914193 3160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:07:06.919732 kubelet[3160]: I1030 
00:07:06.919704 3160 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:07:06.922335 kubelet[3160]: I1030 00:07:06.922323 3160 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:07:06.922586 kubelet[3160]: E1030 00:07:06.922576 3160 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-666d628454\" not found" Oct 30 00:07:06.923704 kubelet[3160]: I1030 00:07:06.923682 3160 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:07:06.923950 kubelet[3160]: I1030 00:07:06.923941 3160 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:07:06.924064 kubelet[3160]: I1030 00:07:06.924058 3160 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:07:06.925494 kubelet[3160]: I1030 00:07:06.925471 3160 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:07:06.926469 kubelet[3160]: I1030 00:07:06.926455 3160 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 30 00:07:06.926545 kubelet[3160]: I1030 00:07:06.926539 3160 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:07:06.926588 kubelet[3160]: I1030 00:07:06.926583 3160 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:07:06.926619 kubelet[3160]: I1030 00:07:06.926615 3160 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:07:06.927721 kubelet[3160]: E1030 00:07:06.927559 3160 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:07:06.929071 kubelet[3160]: I1030 00:07:06.929032 3160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:07:06.929189 kubelet[3160]: I1030 00:07:06.929177 3160 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:07:06.929459 kubelet[3160]: I1030 00:07:06.929446 3160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:07:06.934002 kubelet[3160]: E1030 00:07:06.933581 3160 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:07:06.934301 kubelet[3160]: I1030 00:07:06.934211 3160 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:07:06.934363 kubelet[3160]: I1030 00:07:06.934267 3160 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:07:06.939165 kubelet[3160]: I1030 00:07:06.939148 3160 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:07:06.976081 kubelet[3160]: I1030 00:07:06.976066 3160 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:07:06.976081 kubelet[3160]: I1030 00:07:06.976076 3160 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:07:06.976169 kubelet[3160]: I1030 00:07:06.976090 3160 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:07:06.976195 kubelet[3160]: I1030 00:07:06.976173 3160 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:07:06.976195 kubelet[3160]: I1030 00:07:06.976181 3160 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:07:06.976195 kubelet[3160]: I1030 00:07:06.976194 3160 policy_none.go:49] "None policy: Start" Oct 30 00:07:06.976251 kubelet[3160]: I1030 00:07:06.976202 3160 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:07:06.976251 kubelet[3160]: I1030 00:07:06.976210 3160 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:07:06.976355 kubelet[3160]: I1030 00:07:06.976343 3160 state_mem.go:75] "Updated machine memory state" Oct 30 00:07:06.978676 kubelet[3160]: E1030 00:07:06.978666 3160 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:07:06.979195 kubelet[3160]: I1030 00:07:06.979139 3160 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:07:06.979195 kubelet[3160]: I1030 00:07:06.979149 3160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:07:06.979656 kubelet[3160]: I1030 00:07:06.979592 3160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:07:06.980973 kubelet[3160]: E1030 00:07:06.980961 3160 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 00:07:07.028245 kubelet[3160]: I1030 00:07:07.028110 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.028245 kubelet[3160]: I1030 00:07:07.028163 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.028245 kubelet[3160]: I1030 00:07:07.028110 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.038851 kubelet[3160]: I1030 00:07:07.038834 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:07.038943 kubelet[3160]: I1030 00:07:07.038935 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:07.039006 kubelet[3160]: E1030 00:07:07.038998 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-666d628454\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.039411 kubelet[3160]: I1030 00:07:07.039393 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:07.039463 kubelet[3160]: E1030 00:07:07.039437 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-666d628454\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.084604 kubelet[3160]: I1030 00:07:07.084503 3160 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:07.102769 kubelet[3160]: I1030 00:07:07.102715 3160 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:07.102769 kubelet[3160]: I1030 00:07:07.102761 3160 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125614 kubelet[3160]: I1030 00:07:07.125583 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125682 kubelet[3160]: I1030 00:07:07.125624 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125682 kubelet[3160]: I1030 00:07:07.125641 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125682 kubelet[3160]: I1030 
00:07:07.125657 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125682 kubelet[3160]: I1030 00:07:07.125674 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125783 kubelet[3160]: I1030 00:07:07.125690 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/651c844fef96a72b1b773ab8fdbdd85a-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-666d628454\" (UID: \"651c844fef96a72b1b773ab8fdbdd85a\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125783 kubelet[3160]: I1030 00:07:07.125705 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84561da6fdda0a91e6b45de04c97c6df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-666d628454\" (UID: \"84561da6fdda0a91e6b45de04c97c6df\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125783 kubelet[3160]: I1030 00:07:07.125723 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.125783 kubelet[3160]: I1030 00:07:07.125739 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1125967a3a3cf71640de75a1eaafb38-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-666d628454\" (UID: \"c1125967a3a3cf71640de75a1eaafb38\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.908374 kubelet[3160]: I1030 00:07:07.908244 3160 apiserver.go:52] "Watching apiserver" Oct 30 00:07:07.924678 kubelet[3160]: I1030 00:07:07.924650 3160 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:07:07.966098 kubelet[3160]: I1030 00:07:07.966078 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.966909 kubelet[3160]: I1030 00:07:07.966888 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.974201 kubelet[3160]: I1030 00:07:07.974184 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:07.974305 kubelet[3160]: E1030 00:07:07.974240 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.1.0-n-666d628454\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.974916 kubelet[3160]: I1030 00:07:07.974900 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:07:07.974984 kubelet[3160]: E1030 00:07:07.974945 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-666d628454\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" Oct 30 00:07:07.991044 kubelet[3160]: I1030 00:07:07.990998 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-666d628454" podStartSLOduration=2.990988346 podStartE2EDuration="2.990988346s" podCreationTimestamp="2025-10-30 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:07:07.983468089 +0000 UTC m=+1.135402332" watchObservedRunningTime="2025-10-30 00:07:07.990988346 +0000 UTC m=+1.142922579" Oct 30 00:07:08.000220 kubelet[3160]: I1030 00:07:08.000131 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-666d628454" podStartSLOduration=3.000123493 podStartE2EDuration="3.000123493s" podCreationTimestamp="2025-10-30 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:07:07.991362547 +0000 UTC m=+1.143296788" watchObservedRunningTime="2025-10-30 00:07:08.000123493 +0000 UTC m=+1.152057730" Oct 30 00:07:08.008784 kubelet[3160]: I1030 00:07:08.008751 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-666d628454" podStartSLOduration=1.00873959 podStartE2EDuration="1.00873959s" podCreationTimestamp="2025-10-30 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:07:08.000642824 +0000 UTC m=+1.152577220" watchObservedRunningTime="2025-10-30 00:07:08.00873959 +0000 UTC m=+1.160673918" Oct 30 00:07:12.206821 kubelet[3160]: I1030 00:07:12.206754 3160 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:07:12.207175 containerd[1697]: time="2025-10-30T00:07:12.207111327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:07:12.207373 kubelet[3160]: I1030 00:07:12.207263 3160 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:07:15.064072 systemd[1]: Created slice kubepods-besteffort-podd4077c1e_cfe9_48b5_9233_2c64bdb16f1b.slice - libcontainer container kubepods-besteffort-podd4077c1e_cfe9_48b5_9233_2c64bdb16f1b.slice. 
Oct 30 00:07:15.078496 kubelet[3160]: I1030 00:07:15.078469 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx42\" (UniqueName: \"kubernetes.io/projected/d4077c1e-cfe9-48b5-9233-2c64bdb16f1b-kube-api-access-cjx42\") pod \"kube-proxy-wq2kq\" (UID: \"d4077c1e-cfe9-48b5-9233-2c64bdb16f1b\") " pod="kube-system/kube-proxy-wq2kq" Oct 30 00:07:15.078791 kubelet[3160]: I1030 00:07:15.078765 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4077c1e-cfe9-48b5-9233-2c64bdb16f1b-kube-proxy\") pod \"kube-proxy-wq2kq\" (UID: \"d4077c1e-cfe9-48b5-9233-2c64bdb16f1b\") " pod="kube-system/kube-proxy-wq2kq" Oct 30 00:07:15.078791 kubelet[3160]: I1030 00:07:15.078788 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4077c1e-cfe9-48b5-9233-2c64bdb16f1b-xtables-lock\") pod \"kube-proxy-wq2kq\" (UID: \"d4077c1e-cfe9-48b5-9233-2c64bdb16f1b\") " pod="kube-system/kube-proxy-wq2kq" Oct 30 00:07:15.078849 kubelet[3160]: I1030 00:07:15.078802 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4077c1e-cfe9-48b5-9233-2c64bdb16f1b-lib-modules\") pod \"kube-proxy-wq2kq\" (UID: \"d4077c1e-cfe9-48b5-9233-2c64bdb16f1b\") " pod="kube-system/kube-proxy-wq2kq" Oct 30 00:07:15.373502 containerd[1697]: time="2025-10-30T00:07:15.373426271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq2kq,Uid:d4077c1e-cfe9-48b5-9233-2c64bdb16f1b,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:15.808255 systemd[1]: Created slice kubepods-besteffort-pod324ffed6_ee0b_4ac7_9d15_3134d408190c.slice - libcontainer container kubepods-besteffort-pod324ffed6_ee0b_4ac7_9d15_3134d408190c.slice. 
Oct 30 00:07:15.883470 kubelet[3160]: I1030 00:07:15.883435 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/324ffed6-ee0b-4ac7-9d15-3134d408190c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tmqw6\" (UID: \"324ffed6-ee0b-4ac7-9d15-3134d408190c\") " pod="tigera-operator/tigera-operator-7dcd859c48-tmqw6" Oct 30 00:07:15.883470 kubelet[3160]: I1030 00:07:15.883464 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmwm\" (UniqueName: \"kubernetes.io/projected/324ffed6-ee0b-4ac7-9d15-3134d408190c-kube-api-access-4tmwm\") pod \"tigera-operator-7dcd859c48-tmqw6\" (UID: \"324ffed6-ee0b-4ac7-9d15-3134d408190c\") " pod="tigera-operator/tigera-operator-7dcd859c48-tmqw6" Oct 30 00:07:16.110465 containerd[1697]: time="2025-10-30T00:07:16.110392377Z" level=info msg="connecting to shim 0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc" address="unix:///run/containerd/s/26e5f61cd297351e332cd9177efcda761261492a623e7d606f35eded983d80da" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:16.111156 containerd[1697]: time="2025-10-30T00:07:16.111031709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tmqw6,Uid:324ffed6-ee0b-4ac7-9d15-3134d408190c,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:07:16.140451 systemd[1]: Started cri-containerd-0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc.scope - libcontainer container 0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc. Oct 30 00:07:16.242618 containerd[1697]: time="2025-10-30T00:07:16.242591863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq2kq,Uid:d4077c1e-cfe9-48b5-9233-2c64bdb16f1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc\"" Oct 30 00:07:16.352771 containerd[1697]: time="2025-10-30T00:07:16.352737353Z" level=info msg="CreateContainer within sandbox \"0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:07:16.663307 containerd[1697]: time="2025-10-30T00:07:16.663256442Z" level=info msg="connecting to shim 3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21" address="unix:///run/containerd/s/b02c2df2f89b555431d2e21616063f6995ff74a268c054a642150c3fb15ebc1c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:16.681442 systemd[1]: Started cri-containerd-3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21.scope - libcontainer container 3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21. Oct 30 00:07:16.706137 containerd[1697]: time="2025-10-30T00:07:16.706107214Z" level=info msg="Container 9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:16.707698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007168393.mount: Deactivated successfully. 
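The mount unit name var-lib-containerd-tmpmounts-containerd\x2dmount2007168393.mount above is not garbled output: systemd derives unit names from paths by turning '/' separators into '-' and hex-escaping characters such as '-' itself (what systemd-escape --path does). A deliberately simplified sketch of that mapping, using the path implied by the unit name (an assumption; the path itself is not shown in the log):

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified version of `systemd-escape --path`: path
// separators become '-', and '-' itself is hex-escaped, which is why the
// unit name in the log contains the literal sequence "\x2d".
func escapePath(p string) string {
	var b strings.Builder
	for _, c := range strings.Trim(p, "/") {
		switch c {
		case '/':
			b.WriteByte('-')
		case '-':
			b.WriteString(`\x2d`)
		default:
			b.WriteRune(c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2007168393") + ".mount")
	// Output: var-lib-containerd-tmpmounts-containerd\x2dmount2007168393.mount
}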
Oct 30 00:07:16.798416 containerd[1697]: time="2025-10-30T00:07:16.798394885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tmqw6,Uid:324ffed6-ee0b-4ac7-9d15-3134d408190c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21\"" Oct 30 00:07:16.799464 containerd[1697]: time="2025-10-30T00:07:16.799412363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:07:16.893725 containerd[1697]: time="2025-10-30T00:07:16.893697244Z" level=info msg="CreateContainer within sandbox \"0076a7285a8e4c3d748f2af6c86ed478d0c11afe7902f34455744659f4710acc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34\"" Oct 30 00:07:16.895242 containerd[1697]: time="2025-10-30T00:07:16.894445864Z" level=info msg="StartContainer for \"9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34\"" Oct 30 00:07:16.896151 containerd[1697]: time="2025-10-30T00:07:16.896129368Z" level=info msg="connecting to shim 9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34" address="unix:///run/containerd/s/26e5f61cd297351e332cd9177efcda761261492a623e7d606f35eded983d80da" protocol=ttrpc version=3 Oct 30 00:07:16.913417 systemd[1]: Started cri-containerd-9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34.scope - libcontainer container 9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34. Oct 30 00:07:16.944312 containerd[1697]: time="2025-10-30T00:07:16.944290994Z" level=info msg="StartContainer for \"9460296b1d69cf17be0f9a95985b949834d9550d03e700f710ac59ac3b2e1f34\" returns successfully" Oct 30 00:07:16.992802 kubelet[3160]: I1030 00:07:16.992677 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wq2kq" podStartSLOduration=3.992662574 podStartE2EDuration="3.992662574s" podCreationTimestamp="2025-10-30 00:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:07:16.992527022 +0000 UTC m=+10.144461276" watchObservedRunningTime="2025-10-30 00:07:16.992662574 +0000 UTC m=+10.144596812" Oct 30 00:07:19.033367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094098180.mount: Deactivated successfully. 
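The shim addresses in the "connecting to shim" entries above are ordinary unix sockets under /run/containerd/s/; the ttrpc protocol named in the log runs on top of them. A trivial reachability check against the socket path copied from the kube-proxy shim entry (illustrative only; it does not speak ttrpc):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the kube-proxy "connecting to shim" entry above.
	const sock = "/run/containerd/s/26e5f61cd297351e332cd9177efcda761261492a623e7d606f35eded983d80da"
	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket accepts connections; actual traffic on it is ttrpc")
}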
Oct 30 00:07:22.105208 containerd[1697]: time="2025-10-30T00:07:22.105167033Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:22.148309 containerd[1697]: time="2025-10-30T00:07:22.148218320Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:07:22.151955 containerd[1697]: time="2025-10-30T00:07:22.151185881Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:22.195982 containerd[1697]: time="2025-10-30T00:07:22.195950888Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:22.196561 containerd[1697]: time="2025-10-30T00:07:22.196541066Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.397103579s" Oct 30 00:07:22.196614 containerd[1697]: time="2025-10-30T00:07:22.196567401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:07:22.241599 containerd[1697]: time="2025-10-30T00:07:22.241562817Z" level=info msg="CreateContainer within sandbox \"3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:07:22.406575 containerd[1697]: time="2025-10-30T00:07:22.406514048Z" level=info msg="Container 63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:22.510549 containerd[1697]: time="2025-10-30T00:07:22.510522623Z" level=info msg="CreateContainer within sandbox \"3a37c861b83fe02d937f6cb43e309c39e15876705c7abed70e3786a4872d7f21\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472\"" Oct 30 00:07:22.511018 containerd[1697]: time="2025-10-30T00:07:22.510999855Z" level=info msg="StartContainer for \"63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472\"" Oct 30 00:07:22.511782 containerd[1697]: time="2025-10-30T00:07:22.511747520Z" level=info msg="connecting to shim 63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472" address="unix:///run/containerd/s/b02c2df2f89b555431d2e21616063f6995ff74a268c054a642150c3fb15ebc1c" protocol=ttrpc version=3 Oct 30 00:07:22.530416 systemd[1]: Started cri-containerd-63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472.scope - libcontainer container 63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472. 
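From the figures above (bytes read=25061691 over 5.397103579s), the effective pull rate for the tigera/operator image works out to roughly 4.4 MiB/s, assuming the byte counter covers the whole download:

package main

import "fmt"

func main() {
	const bytesRead = 25061691      // "active requests=0, bytes read=25061691"
	const pullSeconds = 5.397103579 // "... in 5.397103579s"
	// Roughly 4.4 MiB/s, assuming the byte counter covers the whole download.
	fmt.Printf("~%.1f MiB/s\n", bytesRead/pullSeconds/(1024*1024))
}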
Oct 30 00:07:22.560004 containerd[1697]: time="2025-10-30T00:07:22.559983047Z" level=info msg="StartContainer for \"63d297eb5b10074ab9470def40717422928d810e668c0f1774c9d9fc98a3e472\" returns successfully" Oct 30 00:07:27.782454 sudo[2151]: pam_unix(sudo:session): session closed for user root Oct 30 00:07:27.897016 sshd[2150]: Connection closed by 10.200.16.10 port 60998 Oct 30 00:07:27.897409 sshd-session[2147]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:27.904093 systemd[1]: sshd@6-10.200.8.44:22-10.200.16.10:60998.service: Deactivated successfully. Oct 30 00:07:27.907148 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:07:27.908156 systemd[1]: session-9.scope: Consumed 3.919s CPU time, 233.1M memory peak. Oct 30 00:07:27.911292 systemd-logind[1672]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:07:27.913572 systemd-logind[1672]: Removed session 9. Oct 30 00:07:31.912377 kubelet[3160]: I1030 00:07:31.912196 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tmqw6" podStartSLOduration=13.513953336 podStartE2EDuration="18.912183193s" podCreationTimestamp="2025-10-30 00:07:13 +0000 UTC" firstStartedPulling="2025-10-30 00:07:16.799117134 +0000 UTC m=+9.951051373" lastFinishedPulling="2025-10-30 00:07:22.197346987 +0000 UTC m=+15.349281230" observedRunningTime="2025-10-30 00:07:23.003623333 +0000 UTC m=+16.155557571" watchObservedRunningTime="2025-10-30 00:07:31.912183193 +0000 UTC m=+25.064117429" Oct 30 00:07:31.929340 systemd[1]: Created slice kubepods-besteffort-podf250fc01_89b1_49a5_acea_28a6a301c856.slice - libcontainer container kubepods-besteffort-podf250fc01_89b1_49a5_acea_28a6a301c856.slice. Oct 30 00:07:31.985305 kubelet[3160]: I1030 00:07:31.985258 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f250fc01-89b1-49a5-acea-28a6a301c856-typha-certs\") pod \"calico-typha-b58cfc5f5-flgbg\" (UID: \"f250fc01-89b1-49a5-acea-28a6a301c856\") " pod="calico-system/calico-typha-b58cfc5f5-flgbg" Oct 30 00:07:31.985305 kubelet[3160]: I1030 00:07:31.985295 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdt9w\" (UniqueName: \"kubernetes.io/projected/f250fc01-89b1-49a5-acea-28a6a301c856-kube-api-access-mdt9w\") pod \"calico-typha-b58cfc5f5-flgbg\" (UID: \"f250fc01-89b1-49a5-acea-28a6a301c856\") " pod="calico-system/calico-typha-b58cfc5f5-flgbg" Oct 30 00:07:31.985410 kubelet[3160]: I1030 00:07:31.985314 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f250fc01-89b1-49a5-acea-28a6a301c856-tigera-ca-bundle\") pod \"calico-typha-b58cfc5f5-flgbg\" (UID: \"f250fc01-89b1-49a5-acea-28a6a301c856\") " pod="calico-system/calico-typha-b58cfc5f5-flgbg" Oct 30 00:07:32.160134 systemd[1]: Created slice kubepods-besteffort-pod5de9149a_2978_46d7_bd2f_1a0a7fb27038.slice - libcontainer container kubepods-besteffort-pod5de9149a_2978_46d7_bd2f_1a0a7fb27038.slice. 
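For tigera-operator, the podStartSLOduration above (13.513953336s) differs from podStartE2EDuration (18.912183193s) by almost exactly the image-pull window, consistent with the SLO figure excluding pull time. Reproducing the arithmetic from the logged values (the result lands within a few nanoseconds of the reported figure):

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	e2e, _ := time.ParseDuration("18.912183193s")                                // podStartE2EDuration
	firstPull, _ := time.Parse(layout, "2025-10-30 00:07:16.799117134 +0000 UTC") // firstStartedPulling
	lastPull, _ := time.Parse(layout, "2025-10-30 00:07:22.197346987 +0000 UTC")  // lastFinishedPulling
	// E2E minus the pull window: prints 13.51395334s, within a few nanoseconds
	// of the reported podStartSLOduration of 13.513953336s.
	fmt.Println(e2e - lastPull.Sub(firstPull))
}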
Oct 30 00:07:32.186752 kubelet[3160]: I1030 00:07:32.186681 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-flexvol-driver-host\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186752 kubelet[3160]: I1030 00:07:32.186708 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5de9149a-2978-46d7-bd2f-1a0a7fb27038-node-certs\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186752 kubelet[3160]: I1030 00:07:32.186730 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-var-lib-calico\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186752 kubelet[3160]: I1030 00:07:32.186748 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-cni-log-dir\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186882 kubelet[3160]: I1030 00:07:32.186763 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-cni-bin-dir\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186882 kubelet[3160]: I1030 00:07:32.186777 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5de9149a-2978-46d7-bd2f-1a0a7fb27038-tigera-ca-bundle\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186882 kubelet[3160]: I1030 00:07:32.186798 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-cni-net-dir\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186882 kubelet[3160]: I1030 00:07:32.186816 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-lib-modules\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186882 kubelet[3160]: I1030 00:07:32.186828 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-policysync\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186993 kubelet[3160]: I1030 00:07:32.186847 3160 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-var-run-calico\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186993 kubelet[3160]: I1030 00:07:32.186862 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de9149a-2978-46d7-bd2f-1a0a7fb27038-xtables-lock\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.186993 kubelet[3160]: I1030 00:07:32.186877 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786x7\" (UniqueName: \"kubernetes.io/projected/5de9149a-2978-46d7-bd2f-1a0a7fb27038-kube-api-access-786x7\") pod \"calico-node-vgdj5\" (UID: \"5de9149a-2978-46d7-bd2f-1a0a7fb27038\") " pod="calico-system/calico-node-vgdj5" Oct 30 00:07:32.232755 containerd[1697]: time="2025-10-30T00:07:32.232713059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b58cfc5f5-flgbg,Uid:f250fc01-89b1-49a5-acea-28a6a301c856,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:32.284455 containerd[1697]: time="2025-10-30T00:07:32.284406840Z" level=info msg="connecting to shim cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d" address="unix:///run/containerd/s/7fb24975448ea6018c5e0da1041d04b52a9d147bb30a0ed17b87cd44840ca95a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:32.288654 kubelet[3160]: E1030 00:07:32.288639 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.288814 kubelet[3160]: W1030 00:07:32.288696 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.288814 kubelet[3160]: E1030 00:07:32.288717 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.289458 kubelet[3160]: E1030 00:07:32.289439 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.289458 kubelet[3160]: W1030 00:07:32.289458 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.289879 kubelet[3160]: E1030 00:07:32.289718 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.290349 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.291723 kubelet[3160]: W1030 00:07:32.290363 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.290510 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.291428 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.291723 kubelet[3160]: W1030 00:07:32.291440 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.291468 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.291589 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.291723 kubelet[3160]: W1030 00:07:32.291594 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.291723 kubelet[3160]: E1030 00:07:32.291601 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.291936 kubelet[3160]: E1030 00:07:32.291734 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.291936 kubelet[3160]: W1030 00:07:32.291739 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.291936 kubelet[3160]: E1030 00:07:32.291744 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292479 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.292928 kubelet[3160]: W1030 00:07:32.292495 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292508 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292698 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.292928 kubelet[3160]: W1030 00:07:32.292704 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292712 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292812 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.292928 kubelet[3160]: W1030 00:07:32.292835 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.292928 kubelet[3160]: E1030 00:07:32.292842 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.293162 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.295640 kubelet[3160]: W1030 00:07:32.293169 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.293178 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.294377 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.295640 kubelet[3160]: W1030 00:07:32.294387 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.294398 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.294563 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.295640 kubelet[3160]: W1030 00:07:32.294568 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.294575 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.295640 kubelet[3160]: E1030 00:07:32.294691 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.296090 kubelet[3160]: W1030 00:07:32.294696 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.296090 kubelet[3160]: E1030 00:07:32.294713 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.306078 kubelet[3160]: E1030 00:07:32.305978 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.306078 kubelet[3160]: W1030 00:07:32.305991 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.306078 kubelet[3160]: E1030 00:07:32.306002 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.311301 kubelet[3160]: E1030 00:07:32.310347 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.311301 kubelet[3160]: W1030 00:07:32.310359 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.311301 kubelet[3160]: E1030 00:07:32.310372 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.323450 systemd[1]: Started cri-containerd-cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d.scope - libcontainer container cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d. 
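The repeated driver-call failures above come from the kubelet probing the FlexVolume plugin directory for nodeagent~uds/uds, a driver that calico-node typically installs later via its flexvol-driver-host mount (the hostPath volume listed for calico-node-vgdj5 above); until the binary exists, every "init" probe returns empty output and the JSON unmarshal fails. For reference, a minimal sketch of the handshake a FlexVolume driver is expected to implement, writing the JSON the kubelet tries to parse:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// The kubelet invokes the driver binary with a verb as its first argument
	// and parses JSON from stdout; an empty stdout is what produces
	// "unexpected end of JSON input" in the log above.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false},
		})
		return
	}
	fmt.Println(`{"status": "Not supported"}`)
}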
Oct 30 00:07:32.365257 kubelet[3160]: E1030 00:07:32.365234 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:32.375160 containerd[1697]: time="2025-10-30T00:07:32.375142677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b58cfc5f5-flgbg,Uid:f250fc01-89b1-49a5-acea-28a6a301c856,Namespace:calico-system,Attempt:0,} returns sandbox id \"cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d\"" Oct 30 00:07:32.376470 containerd[1697]: time="2025-10-30T00:07:32.376447207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:07:32.384065 kubelet[3160]: E1030 00:07:32.384049 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384065 kubelet[3160]: W1030 00:07:32.384063 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384144 kubelet[3160]: E1030 00:07:32.384074 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384170 kubelet[3160]: E1030 00:07:32.384164 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384170 kubelet[3160]: W1030 00:07:32.384168 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384210 kubelet[3160]: E1030 00:07:32.384175 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384257 kubelet[3160]: E1030 00:07:32.384247 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384257 kubelet[3160]: W1030 00:07:32.384255 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384313 kubelet[3160]: E1030 00:07:32.384261 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384397 kubelet[3160]: E1030 00:07:32.384387 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384397 kubelet[3160]: W1030 00:07:32.384394 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384439 kubelet[3160]: E1030 00:07:32.384400 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.384494 kubelet[3160]: E1030 00:07:32.384485 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384494 kubelet[3160]: W1030 00:07:32.384492 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384535 kubelet[3160]: E1030 00:07:32.384497 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384575 kubelet[3160]: E1030 00:07:32.384567 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384575 kubelet[3160]: W1030 00:07:32.384573 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384615 kubelet[3160]: E1030 00:07:32.384578 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384655 kubelet[3160]: E1030 00:07:32.384647 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384655 kubelet[3160]: W1030 00:07:32.384653 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384694 kubelet[3160]: E1030 00:07:32.384658 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384739 kubelet[3160]: E1030 00:07:32.384730 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384739 kubelet[3160]: W1030 00:07:32.384736 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384780 kubelet[3160]: E1030 00:07:32.384742 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.384821 kubelet[3160]: E1030 00:07:32.384814 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384842 kubelet[3160]: W1030 00:07:32.384826 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384842 kubelet[3160]: E1030 00:07:32.384831 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.384913 kubelet[3160]: E1030 00:07:32.384904 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.384913 kubelet[3160]: W1030 00:07:32.384910 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.384955 kubelet[3160]: E1030 00:07:32.384915 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385012 kubelet[3160]: E1030 00:07:32.385004 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385012 kubelet[3160]: W1030 00:07:32.385010 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385054 kubelet[3160]: E1030 00:07:32.385015 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385113 kubelet[3160]: E1030 00:07:32.385105 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385113 kubelet[3160]: W1030 00:07:32.385111 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385153 kubelet[3160]: E1030 00:07:32.385117 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385205 kubelet[3160]: E1030 00:07:32.385197 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385205 kubelet[3160]: W1030 00:07:32.385203 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385246 kubelet[3160]: E1030 00:07:32.385208 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385305 kubelet[3160]: E1030 00:07:32.385291 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385305 kubelet[3160]: W1030 00:07:32.385302 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385350 kubelet[3160]: E1030 00:07:32.385307 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.385389 kubelet[3160]: E1030 00:07:32.385379 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385389 kubelet[3160]: W1030 00:07:32.385385 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385430 kubelet[3160]: E1030 00:07:32.385390 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385469 kubelet[3160]: E1030 00:07:32.385460 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385469 kubelet[3160]: W1030 00:07:32.385466 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385507 kubelet[3160]: E1030 00:07:32.385472 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385555 kubelet[3160]: E1030 00:07:32.385547 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385555 kubelet[3160]: W1030 00:07:32.385552 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385604 kubelet[3160]: E1030 00:07:32.385557 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385632 kubelet[3160]: E1030 00:07:32.385626 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385632 kubelet[3160]: W1030 00:07:32.385631 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385672 kubelet[3160]: E1030 00:07:32.385636 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.385709 kubelet[3160]: E1030 00:07:32.385701 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385709 kubelet[3160]: W1030 00:07:32.385706 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385749 kubelet[3160]: E1030 00:07:32.385711 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.385786 kubelet[3160]: E1030 00:07:32.385779 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.385786 kubelet[3160]: W1030 00:07:32.385785 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.385822 kubelet[3160]: E1030 00:07:32.385790 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.388045 kubelet[3160]: E1030 00:07:32.388031 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388045 kubelet[3160]: W1030 00:07:32.388042 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388127 kubelet[3160]: E1030 00:07:32.388052 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.388127 kubelet[3160]: I1030 00:07:32.388072 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2xf\" (UniqueName: \"kubernetes.io/projected/a3b96faf-6434-4c32-bdb2-a83d279f75ef-kube-api-access-fr2xf\") pod \"csi-node-driver-fgwld\" (UID: \"a3b96faf-6434-4c32-bdb2-a83d279f75ef\") " pod="calico-system/csi-node-driver-fgwld" Oct 30 00:07:32.388200 kubelet[3160]: E1030 00:07:32.388189 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388200 kubelet[3160]: W1030 00:07:32.388197 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388242 kubelet[3160]: E1030 00:07:32.388205 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.388242 kubelet[3160]: I1030 00:07:32.388224 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a3b96faf-6434-4c32-bdb2-a83d279f75ef-varrun\") pod \"csi-node-driver-fgwld\" (UID: \"a3b96faf-6434-4c32-bdb2-a83d279f75ef\") " pod="calico-system/csi-node-driver-fgwld" Oct 30 00:07:32.388367 kubelet[3160]: E1030 00:07:32.388356 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388367 kubelet[3160]: W1030 00:07:32.388365 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388411 kubelet[3160]: E1030 00:07:32.388372 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.388411 kubelet[3160]: I1030 00:07:32.388390 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b96faf-6434-4c32-bdb2-a83d279f75ef-kubelet-dir\") pod \"csi-node-driver-fgwld\" (UID: \"a3b96faf-6434-4c32-bdb2-a83d279f75ef\") " pod="calico-system/csi-node-driver-fgwld" Oct 30 00:07:32.388496 kubelet[3160]: E1030 00:07:32.388487 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388526 kubelet[3160]: W1030 00:07:32.388493 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388549 kubelet[3160]: E1030 00:07:32.388526 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.388549 kubelet[3160]: I1030 00:07:32.388545 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3b96faf-6434-4c32-bdb2-a83d279f75ef-socket-dir\") pod \"csi-node-driver-fgwld\" (UID: \"a3b96faf-6434-4c32-bdb2-a83d279f75ef\") " pod="calico-system/csi-node-driver-fgwld" Oct 30 00:07:32.388650 kubelet[3160]: E1030 00:07:32.388641 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388683 kubelet[3160]: W1030 00:07:32.388648 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388683 kubelet[3160]: E1030 00:07:32.388666 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.388723 kubelet[3160]: I1030 00:07:32.388684 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3b96faf-6434-4c32-bdb2-a83d279f75ef-registration-dir\") pod \"csi-node-driver-fgwld\" (UID: \"a3b96faf-6434-4c32-bdb2-a83d279f75ef\") " pod="calico-system/csi-node-driver-fgwld" Oct 30 00:07:32.388814 kubelet[3160]: E1030 00:07:32.388805 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388814 kubelet[3160]: W1030 00:07:32.388812 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388860 kubelet[3160]: E1030 00:07:32.388819 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.388923 kubelet[3160]: E1030 00:07:32.388914 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.388923 kubelet[3160]: W1030 00:07:32.388920 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.388968 kubelet[3160]: E1030 00:07:32.388926 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389023 kubelet[3160]: E1030 00:07:32.389014 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389023 kubelet[3160]: W1030 00:07:32.389020 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389062 kubelet[3160]: E1030 00:07:32.389025 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389123 kubelet[3160]: E1030 00:07:32.389114 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389123 kubelet[3160]: W1030 00:07:32.389120 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389168 kubelet[3160]: E1030 00:07:32.389125 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389231 kubelet[3160]: E1030 00:07:32.389222 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389231 kubelet[3160]: W1030 00:07:32.389229 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389269 kubelet[3160]: E1030 00:07:32.389235 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389375 kubelet[3160]: E1030 00:07:32.389366 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389375 kubelet[3160]: W1030 00:07:32.389373 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389429 kubelet[3160]: E1030 00:07:32.389390 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.389493 kubelet[3160]: E1030 00:07:32.389485 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389493 kubelet[3160]: W1030 00:07:32.389492 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389546 kubelet[3160]: E1030 00:07:32.389497 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389641 kubelet[3160]: E1030 00:07:32.389632 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389641 kubelet[3160]: W1030 00:07:32.389639 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389696 kubelet[3160]: E1030 00:07:32.389644 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389760 kubelet[3160]: E1030 00:07:32.389748 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389760 kubelet[3160]: W1030 00:07:32.389754 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389803 kubelet[3160]: E1030 00:07:32.389760 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.389852 kubelet[3160]: E1030 00:07:32.389838 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.389877 kubelet[3160]: W1030 00:07:32.389852 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.389877 kubelet[3160]: E1030 00:07:32.389858 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.463484 containerd[1697]: time="2025-10-30T00:07:32.463422256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgdj5,Uid:5de9149a-2978-46d7-bd2f-1a0a7fb27038,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:32.490305 kubelet[3160]: E1030 00:07:32.489955 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.490305 kubelet[3160]: W1030 00:07:32.490226 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.490305 kubelet[3160]: E1030 00:07:32.490239 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.490682 kubelet[3160]: E1030 00:07:32.490662 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.490682 kubelet[3160]: W1030 00:07:32.490680 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.490753 kubelet[3160]: E1030 00:07:32.490690 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.491310 kubelet[3160]: E1030 00:07:32.491264 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.491310 kubelet[3160]: W1030 00:07:32.491306 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.491387 kubelet[3160]: E1030 00:07:32.491316 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.491479 kubelet[3160]: E1030 00:07:32.491471 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.491505 kubelet[3160]: W1030 00:07:32.491480 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.491505 kubelet[3160]: E1030 00:07:32.491489 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.491605 kubelet[3160]: E1030 00:07:32.491597 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.491642 kubelet[3160]: W1030 00:07:32.491605 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.491642 kubelet[3160]: E1030 00:07:32.491612 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.491790 kubelet[3160]: E1030 00:07:32.491780 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.491816 kubelet[3160]: W1030 00:07:32.491804 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.491816 kubelet[3160]: E1030 00:07:32.491812 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.491920 kubelet[3160]: E1030 00:07:32.491913 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.491944 kubelet[3160]: W1030 00:07:32.491920 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.491944 kubelet[3160]: E1030 00:07:32.491927 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.492044 kubelet[3160]: E1030 00:07:32.492037 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.492069 kubelet[3160]: W1030 00:07:32.492045 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.492069 kubelet[3160]: E1030 00:07:32.492050 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.492193 kubelet[3160]: E1030 00:07:32.492166 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.492193 kubelet[3160]: W1030 00:07:32.492180 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.492193 kubelet[3160]: E1030 00:07:32.492187 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.492405 kubelet[3160]: E1030 00:07:32.492395 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.492405 kubelet[3160]: W1030 00:07:32.492405 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.492464 kubelet[3160]: E1030 00:07:32.492414 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492558 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493318 kubelet[3160]: W1030 00:07:32.492564 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492571 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492716 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493318 kubelet[3160]: W1030 00:07:32.492729 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492734 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492840 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493318 kubelet[3160]: W1030 00:07:32.492858 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492862 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493318 kubelet[3160]: E1030 00:07:32.492970 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493480 kubelet[3160]: W1030 00:07:32.492987 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493480 kubelet[3160]: E1030 00:07:32.492992 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.493480 kubelet[3160]: E1030 00:07:32.493123 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493480 kubelet[3160]: W1030 00:07:32.493140 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493480 kubelet[3160]: E1030 00:07:32.493144 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493584 kubelet[3160]: E1030 00:07:32.493567 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493584 kubelet[3160]: W1030 00:07:32.493581 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493631 kubelet[3160]: E1030 00:07:32.493590 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493697 kubelet[3160]: E1030 00:07:32.493688 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493697 kubelet[3160]: W1030 00:07:32.493695 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.493743 kubelet[3160]: E1030 00:07:32.493701 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.493872 kubelet[3160]: E1030 00:07:32.493850 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.493872 kubelet[3160]: W1030 00:07:32.493870 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.494123 kubelet[3160]: E1030 00:07:32.493876 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.494766 kubelet[3160]: E1030 00:07:32.494391 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.494766 kubelet[3160]: W1030 00:07:32.494404 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.494766 kubelet[3160]: E1030 00:07:32.494420 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.495471 kubelet[3160]: E1030 00:07:32.494993 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.495577 kubelet[3160]: W1030 00:07:32.495564 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.495622 kubelet[3160]: E1030 00:07:32.495615 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.495889 kubelet[3160]: E1030 00:07:32.495858 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.496033 kubelet[3160]: W1030 00:07:32.496024 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.496094 kubelet[3160]: E1030 00:07:32.496075 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.497300 kubelet[3160]: E1030 00:07:32.496533 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.497366 kubelet[3160]: W1030 00:07:32.497356 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.497403 kubelet[3160]: E1030 00:07:32.497397 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.499726 kubelet[3160]: E1030 00:07:32.499635 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.499726 kubelet[3160]: W1030 00:07:32.499647 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.499726 kubelet[3160]: E1030 00:07:32.499658 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.500221 kubelet[3160]: E1030 00:07:32.500006 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.500221 kubelet[3160]: W1030 00:07:32.500015 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.500221 kubelet[3160]: E1030 00:07:32.500025 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:07:32.500376 kubelet[3160]: E1030 00:07:32.500369 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.500409 kubelet[3160]: W1030 00:07:32.500403 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.500442 kubelet[3160]: E1030 00:07:32.500436 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.503783 kubelet[3160]: E1030 00:07:32.503754 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:07:32.503783 kubelet[3160]: W1030 00:07:32.503779 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:07:32.503861 kubelet[3160]: E1030 00:07:32.503789 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:07:32.536266 containerd[1697]: time="2025-10-30T00:07:32.536033753Z" level=info msg="connecting to shim d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a" address="unix:///run/containerd/s/79dfd393564959a1157bcbcd8edd936debb0692a2c020569b615a803e686470b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:32.556414 systemd[1]: Started cri-containerd-d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a.scope - libcontainer container d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a. Oct 30 00:07:32.575569 containerd[1697]: time="2025-10-30T00:07:32.575550286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgdj5,Uid:5de9149a-2978-46d7-bd2f-1a0a7fb27038,Namespace:calico-system,Attempt:0,} returns sandbox id \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\"" Oct 30 00:07:33.775574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003255615.mount: Deactivated successfully. 
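Editor's note: the probe noise would stop if the nodeagent~uds directory contained an executable named uds that answers the FlexVolume calls with a JSON status on stdout. The sketch below assumes the conventional FlexVolume call protocol (subcommand as the first argument, one JSON object as the reply, "Not supported" for unimplemented calls); it is not the real nodeagent~uds driver, and whether a stub is appropriate depends on what that plugin is actually expected to do on this node.

```go
// Stand-in FlexVolume driver entry point, assuming the standard call
// convention: kubelet invokes the executable with a subcommand ("init",
// "mount", ...) and reads a single JSON object from stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// reply marshals v and writes it to stdout as the driver's JSON response.
func reply(v interface{}) {
	out, _ := json.Marshal(v)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Answering "init" with a Success status (and attach disabled) is what
		// would satisfy the probe that currently fails in the log above.
		reply(map[string]interface{}{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false},
		})
		return
	}
	// Any call this stub does not implement gets a "Not supported" status.
	reply(map[string]string{"status": "Not supported"})
	os.Exit(1)
}
```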
Oct 30 00:07:33.927763 kubelet[3160]: E1030 00:07:33.927724 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:35.927240 kubelet[3160]: E1030 00:07:35.927022 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:37.927477 kubelet[3160]: E1030 00:07:37.927437 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:39.927796 kubelet[3160]: E1030 00:07:39.927750 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:41.927337 kubelet[3160]: E1030 00:07:41.927263 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:43.926871 kubelet[3160]: E1030 00:07:43.926839 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:45.927566 kubelet[3160]: E1030 00:07:45.927531 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:47.927795 kubelet[3160]: E1030 00:07:47.927734 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:49.927773 kubelet[3160]: E1030 00:07:49.927739 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:51.927907 kubelet[3160]: E1030 00:07:51.927871 3160 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:53.926898 kubelet[3160]: E1030 00:07:53.926856 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:55.926885 kubelet[3160]: E1030 00:07:55.926846 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:57.927165 kubelet[3160]: E1030 00:07:57.927094 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:07:59.927206 kubelet[3160]: E1030 00:07:59.927171 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:01.053252 containerd[1697]: time="2025-10-30T00:08:01.053213048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:01.058207 containerd[1697]: time="2025-10-30T00:08:01.058174813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:08:01.061188 containerd[1697]: time="2025-10-30T00:08:01.061146135Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:01.064451 containerd[1697]: time="2025-10-30T00:08:01.064425276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:01.064866 containerd[1697]: time="2025-10-30T00:08:01.064849224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 28.688375149s" Oct 30 00:08:01.064935 containerd[1697]: time="2025-10-30T00:08:01.064925415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:08:01.066762 containerd[1697]: time="2025-10-30T00:08:01.066099314Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:08:01.085526 containerd[1697]: time="2025-10-30T00:08:01.085498508Z" level=info msg="CreateContainer within sandbox \"cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:08:01.111289 containerd[1697]: time="2025-10-30T00:08:01.110465677Z" level=info msg="Container 4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:01.113387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455278871.mount: Deactivated successfully. Oct 30 00:08:01.145490 containerd[1697]: time="2025-10-30T00:08:01.145465005Z" level=info msg="CreateContainer within sandbox \"cadc6a81327e8a8af1fa67202c91ed98995e7afa57ce819c63d55a910f1d587d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b\"" Oct 30 00:08:01.145844 containerd[1697]: time="2025-10-30T00:08:01.145813451Z" level=info msg="StartContainer for \"4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b\"" Oct 30 00:08:01.146912 containerd[1697]: time="2025-10-30T00:08:01.146863282Z" level=info msg="connecting to shim 4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b" address="unix:///run/containerd/s/7fb24975448ea6018c5e0da1041d04b52a9d147bb30a0ed17b87cd44840ca95a" protocol=ttrpc version=3 Oct 30 00:08:01.164406 systemd[1]: Started cri-containerd-4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b.scope - libcontainer container 4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b. Oct 30 00:08:01.203897 containerd[1697]: time="2025-10-30T00:08:01.203844695Z" level=info msg="StartContainer for \"4c8a5176dde89bad1611520d3c0bc57aa2c9fc038733df788e940bae3186784b\" returns successfully" Oct 30 00:08:01.927789 kubelet[3160]: E1030 00:08:01.927744 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:02.049920 kubelet[3160]: I1030 00:08:02.049464 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b58cfc5f5-flgbg" podStartSLOduration=2.359924713 podStartE2EDuration="31.049450536s" podCreationTimestamp="2025-10-30 00:07:31 +0000 UTC" firstStartedPulling="2025-10-30 00:07:32.37592005 +0000 UTC m=+25.527854276" lastFinishedPulling="2025-10-30 00:08:01.065445867 +0000 UTC m=+54.217380099" observedRunningTime="2025-10-30 00:08:02.049226788 +0000 UTC m=+55.201161026" watchObservedRunningTime="2025-10-30 00:08:02.049450536 +0000 UTC m=+55.201384776" Oct 30 00:08:02.069043 kubelet[3160]: E1030 00:08:02.069008 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069043 kubelet[3160]: W1030 00:08:02.069039 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069160 kubelet[3160]: E1030 00:08:02.069057 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.069192 kubelet[3160]: E1030 00:08:02.069188 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069220 kubelet[3160]: W1030 00:08:02.069194 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069220 kubelet[3160]: E1030 00:08:02.069202 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.069333 kubelet[3160]: E1030 00:08:02.069321 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069333 kubelet[3160]: W1030 00:08:02.069330 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069401 kubelet[3160]: E1030 00:08:02.069336 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.069508 kubelet[3160]: E1030 00:08:02.069489 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069508 kubelet[3160]: W1030 00:08:02.069505 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069590 kubelet[3160]: E1030 00:08:02.069512 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.069634 kubelet[3160]: E1030 00:08:02.069609 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069634 kubelet[3160]: W1030 00:08:02.069614 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069634 kubelet[3160]: E1030 00:08:02.069620 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.069739 kubelet[3160]: E1030 00:08:02.069701 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069739 kubelet[3160]: W1030 00:08:02.069706 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069739 kubelet[3160]: E1030 00:08:02.069711 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.069841 kubelet[3160]: E1030 00:08:02.069786 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069841 kubelet[3160]: W1030 00:08:02.069791 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069841 kubelet[3160]: E1030 00:08:02.069797 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.069950 kubelet[3160]: E1030 00:08:02.069876 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.069950 kubelet[3160]: W1030 00:08:02.069881 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.069950 kubelet[3160]: E1030 00:08:02.069886 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070056 kubelet[3160]: E1030 00:08:02.069968 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070056 kubelet[3160]: W1030 00:08:02.069974 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070056 kubelet[3160]: E1030 00:08:02.069979 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070056 kubelet[3160]: E1030 00:08:02.070054 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070198 kubelet[3160]: W1030 00:08:02.070058 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070198 kubelet[3160]: E1030 00:08:02.070064 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070198 kubelet[3160]: E1030 00:08:02.070150 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070198 kubelet[3160]: W1030 00:08:02.070154 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070198 kubelet[3160]: E1030 00:08:02.070160 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.070406 kubelet[3160]: E1030 00:08:02.070241 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070406 kubelet[3160]: W1030 00:08:02.070245 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070406 kubelet[3160]: E1030 00:08:02.070251 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070406 kubelet[3160]: E1030 00:08:02.070350 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070406 kubelet[3160]: W1030 00:08:02.070355 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070406 kubelet[3160]: E1030 00:08:02.070361 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070942 kubelet[3160]: E1030 00:08:02.070488 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070942 kubelet[3160]: W1030 00:08:02.070493 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070942 kubelet[3160]: E1030 00:08:02.070500 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.070942 kubelet[3160]: E1030 00:08:02.070582 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.070942 kubelet[3160]: W1030 00:08:02.070586 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.070942 kubelet[3160]: E1030 00:08:02.070592 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.087641 kubelet[3160]: E1030 00:08:02.087625 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.087641 kubelet[3160]: W1030 00:08:02.087638 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.087735 kubelet[3160]: E1030 00:08:02.087651 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.087795 kubelet[3160]: E1030 00:08:02.087785 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.087818 kubelet[3160]: W1030 00:08:02.087792 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.087841 kubelet[3160]: E1030 00:08:02.087819 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.087933 kubelet[3160]: E1030 00:08:02.087923 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.087933 kubelet[3160]: W1030 00:08:02.087931 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.087987 kubelet[3160]: E1030 00:08:02.087938 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.088219 kubelet[3160]: E1030 00:08:02.088103 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.088219 kubelet[3160]: W1030 00:08:02.088128 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.088219 kubelet[3160]: E1030 00:08:02.088152 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.088379 kubelet[3160]: E1030 00:08:02.088372 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.088461 kubelet[3160]: W1030 00:08:02.088419 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.088461 kubelet[3160]: E1030 00:08:02.088429 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.088706 kubelet[3160]: E1030 00:08:02.088608 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.088706 kubelet[3160]: W1030 00:08:02.088616 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.088706 kubelet[3160]: E1030 00:08:02.088623 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.088998 kubelet[3160]: E1030 00:08:02.088970 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.088998 kubelet[3160]: W1030 00:08:02.088993 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.089074 kubelet[3160]: E1030 00:08:02.089002 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.089712 kubelet[3160]: E1030 00:08:02.089683 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.089712 kubelet[3160]: W1030 00:08:02.089706 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.089799 kubelet[3160]: E1030 00:08:02.089717 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.089960 kubelet[3160]: E1030 00:08:02.089879 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.089960 kubelet[3160]: W1030 00:08:02.089888 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.089960 kubelet[3160]: E1030 00:08:02.089896 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.090075 kubelet[3160]: E1030 00:08:02.090058 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.090075 kubelet[3160]: W1030 00:08:02.090069 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.090169 kubelet[3160]: E1030 00:08:02.090076 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.090210 kubelet[3160]: E1030 00:08:02.090180 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.090210 kubelet[3160]: W1030 00:08:02.090185 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.090210 kubelet[3160]: E1030 00:08:02.090192 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.090405 kubelet[3160]: E1030 00:08:02.090394 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.090405 kubelet[3160]: W1030 00:08:02.090402 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.090451 kubelet[3160]: E1030 00:08:02.090410 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.090549 kubelet[3160]: E1030 00:08:02.090539 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.090549 kubelet[3160]: W1030 00:08:02.090547 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.090598 kubelet[3160]: E1030 00:08:02.090554 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.090902 kubelet[3160]: E1030 00:08:02.090802 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.090902 kubelet[3160]: W1030 00:08:02.090831 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.090902 kubelet[3160]: E1030 00:08:02.090853 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.091023 kubelet[3160]: E1030 00:08:02.090997 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.091023 kubelet[3160]: W1030 00:08:02.091013 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.091023 kubelet[3160]: E1030 00:08:02.091020 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.091199 kubelet[3160]: E1030 00:08:02.091172 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.091199 kubelet[3160]: W1030 00:08:02.091193 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.091255 kubelet[3160]: E1030 00:08:02.091199 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:02.091429 kubelet[3160]: E1030 00:08:02.091415 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.091429 kubelet[3160]: W1030 00:08:02.091425 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.091497 kubelet[3160]: E1030 00:08:02.091433 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.091606 kubelet[3160]: E1030 00:08:02.091599 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:02.091635 kubelet[3160]: W1030 00:08:02.091606 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:02.091635 kubelet[3160]: E1030 00:08:02.091614 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:02.977756 containerd[1697]: time="2025-10-30T00:08:02.977715063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:02.980971 containerd[1697]: time="2025-10-30T00:08:02.980943877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 00:08:02.983378 containerd[1697]: time="2025-10-30T00:08:02.983337681Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:02.987146 containerd[1697]: time="2025-10-30T00:08:02.987029293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:02.987744 containerd[1697]: time="2025-10-30T00:08:02.987591840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.921458802s" Oct 30 00:08:02.987744 containerd[1697]: time="2025-10-30T00:08:02.987619764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:08:02.999862 containerd[1697]: time="2025-10-30T00:08:02.999812118Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:08:03.022656 containerd[1697]: time="2025-10-30T00:08:03.021123469Z" level=info msg="Container 6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e: 
CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:03.024919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027523447.mount: Deactivated successfully. Oct 30 00:08:03.038161 containerd[1697]: time="2025-10-30T00:08:03.038141533Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\"" Oct 30 00:08:03.038527 containerd[1697]: time="2025-10-30T00:08:03.038507180Z" level=info msg="StartContainer for \"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\"" Oct 30 00:08:03.040038 containerd[1697]: time="2025-10-30T00:08:03.039989077Z" level=info msg="connecting to shim 6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e" address="unix:///run/containerd/s/79dfd393564959a1157bcbcd8edd936debb0692a2c020569b615a803e686470b" protocol=ttrpc version=3 Oct 30 00:08:03.056399 systemd[1]: Started cri-containerd-6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e.scope - libcontainer container 6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e. Oct 30 00:08:03.077911 kubelet[3160]: E1030 00:08:03.077892 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.077911 kubelet[3160]: W1030 00:08:03.077910 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.078160 kubelet[3160]: E1030 00:08:03.077924 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.078160 kubelet[3160]: E1030 00:08:03.078059 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.078160 kubelet[3160]: W1030 00:08:03.078065 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.078160 kubelet[3160]: E1030 00:08:03.078073 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.078254 kubelet[3160]: E1030 00:08:03.078193 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.078254 kubelet[3160]: W1030 00:08:03.078198 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.078254 kubelet[3160]: E1030 00:08:03.078204 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:03.078339 kubelet[3160]: E1030 00:08:03.078329 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.078339 kubelet[3160]: W1030 00:08:03.078334 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.078383 kubelet[3160]: E1030 00:08:03.078341 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.078455 kubelet[3160]: E1030 00:08:03.078437 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.078455 kubelet[3160]: W1030 00:08:03.078446 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.078503 kubelet[3160]: E1030 00:08:03.078463 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078547 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079571 kubelet[3160]: W1030 00:08:03.078551 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078556 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078641 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079571 kubelet[3160]: W1030 00:08:03.078645 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078650 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078738 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079571 kubelet[3160]: W1030 00:08:03.078743 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078748 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:03.079571 kubelet[3160]: E1030 00:08:03.078866 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079785 kubelet[3160]: W1030 00:08:03.078871 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.078876 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.078965 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079785 kubelet[3160]: W1030 00:08:03.078969 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.078974 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.079058 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079785 kubelet[3160]: W1030 00:08:03.079063 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.079068 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079785 kubelet[3160]: E1030 00:08:03.079200 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079785 kubelet[3160]: W1030 00:08:03.079205 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079211 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079335 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079921 kubelet[3160]: W1030 00:08:03.079339 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079345 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079445 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079921 kubelet[3160]: W1030 00:08:03.079450 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079456 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079575 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:08:03.079921 kubelet[3160]: W1030 00:08:03.079579 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:08:03.079921 kubelet[3160]: E1030 00:08:03.079585 3160 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:08:03.087513 systemd[1]: cri-containerd-6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e.scope: Deactivated successfully. Oct 30 00:08:03.090266 containerd[1697]: time="2025-10-30T00:08:03.090186191Z" level=info msg="received exit event container_id:\"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\" id:\"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\" pid:3853 exited_at:{seconds:1761782883 nanos:89952359}" Oct 30 00:08:03.091296 containerd[1697]: time="2025-10-30T00:08:03.091026357Z" level=info msg="StartContainer for \"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\" returns successfully" Oct 30 00:08:03.091296 containerd[1697]: time="2025-10-30T00:08:03.091058440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\" id:\"6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e\" pid:3853 exited_at:{seconds:1761782883 nanos:89952359}" Oct 30 00:08:03.106624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8624ac653c9c7c1e578470daeb139992cb0f370380c20fda4174296b11745e-rootfs.mount: Deactivated successfully. 
Oct 30 00:08:03.927436 kubelet[3160]: E1030 00:08:03.927396 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:04.051306 containerd[1697]: time="2025-10-30T00:08:04.050613072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:08:05.927904 kubelet[3160]: E1030 00:08:05.927863 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:06.879596 containerd[1697]: time="2025-10-30T00:08:06.879559477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:06.882084 containerd[1697]: time="2025-10-30T00:08:06.882020128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:08:06.884550 containerd[1697]: time="2025-10-30T00:08:06.884476290Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:06.888084 containerd[1697]: time="2025-10-30T00:08:06.888055937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:06.888490 containerd[1697]: time="2025-10-30T00:08:06.888469776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.837802135s" Oct 30 00:08:06.888558 containerd[1697]: time="2025-10-30T00:08:06.888546891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:08:06.897857 containerd[1697]: time="2025-10-30T00:08:06.897830526Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:08:06.918425 containerd[1697]: time="2025-10-30T00:08:06.917336873Z" level=info msg="Container 083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:06.935029 containerd[1697]: time="2025-10-30T00:08:06.935006441Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\"" Oct 30 00:08:06.936303 containerd[1697]: time="2025-10-30T00:08:06.936267564Z" level=info msg="StartContainer for \"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\"" Oct 30 00:08:06.937303 
containerd[1697]: time="2025-10-30T00:08:06.937252323Z" level=info msg="connecting to shim 083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da" address="unix:///run/containerd/s/79dfd393564959a1157bcbcd8edd936debb0692a2c020569b615a803e686470b" protocol=ttrpc version=3 Oct 30 00:08:06.959414 systemd[1]: Started cri-containerd-083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da.scope - libcontainer container 083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da. Oct 30 00:08:06.995014 containerd[1697]: time="2025-10-30T00:08:06.994583561Z" level=info msg="StartContainer for \"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\" returns successfully" Oct 30 00:08:07.927741 kubelet[3160]: E1030 00:08:07.927710 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:08.217847 containerd[1697]: time="2025-10-30T00:08:08.217684767Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:08:08.219520 systemd[1]: cri-containerd-083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da.scope: Deactivated successfully. Oct 30 00:08:08.219753 systemd[1]: cri-containerd-083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da.scope: Consumed 373ms CPU time, 196.1M memory peak, 171.3M written to disk. Oct 30 00:08:08.220811 containerd[1697]: time="2025-10-30T00:08:08.220423515Z" level=info msg="received exit event container_id:\"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\" id:\"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\" pid:3926 exited_at:{seconds:1761782888 nanos:220162136}" Oct 30 00:08:08.221082 containerd[1697]: time="2025-10-30T00:08:08.220986497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\" id:\"083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da\" pid:3926 exited_at:{seconds:1761782888 nanos:220162136}" Oct 30 00:08:08.237681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083e7868485dbb77625c487fcba4331c067eabbd4e5b19f33feb57c89bfbe7da-rootfs.mount: Deactivated successfully. Oct 30 00:08:08.270658 kubelet[3160]: I1030 00:08:08.270643 3160 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 00:08:08.522310 systemd[1]: Created slice kubepods-besteffort-podfc436dc2_0f71_481b_9a03_aea4931c7123.slice - libcontainer container kubepods-besteffort-podfc436dc2_0f71_481b_9a03_aea4931c7123.slice. 
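
Note: the "failed to reload cni configuration" entry above and the recurring "cni plugin not initialized" pod errors share a cause: containerd watches /etc/cni/net.d and reloads on file writes, but the write it observed was calico-kubeconfig, and no network configuration file was present in the directory yet. A rough sketch of the condition being reported, assuming the conventional *.conf/*.conflist/*.json extensions (an illustration, not containerd's actual code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig mirrors the condition behind the logged
    // "no network config found in /etc/cni/net.d" message: the reload can
    // only succeed once a network config file exists in the directory.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/cni/net.d")
        if err != nil {
            fmt.Println("read config dir:", err)
            return
        }
        if !ok {
            fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
        }
    }
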
Oct 30 00:08:08.532409 kubelet[3160]: I1030 00:08:08.532383 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-ca-bundle\") pod \"whisker-b4884fc94-zxvrg\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " pod="calico-system/whisker-b4884fc94-zxvrg" Oct 30 00:08:08.532494 kubelet[3160]: I1030 00:08:08.532420 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-backend-key-pair\") pod \"whisker-b4884fc94-zxvrg\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " pod="calico-system/whisker-b4884fc94-zxvrg" Oct 30 00:08:08.532494 kubelet[3160]: I1030 00:08:08.532437 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2kh\" (UniqueName: \"kubernetes.io/projected/fc436dc2-0f71-481b-9a03-aea4931c7123-kube-api-access-zk2kh\") pod \"whisker-b4884fc94-zxvrg\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " pod="calico-system/whisker-b4884fc94-zxvrg" Oct 30 00:08:08.733380 kubelet[3160]: I1030 00:08:08.733333 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7023f33-bcd5-455f-bb39-ef094539fe80-calico-apiserver-certs\") pod \"calico-apiserver-d4dc65c88-44scr\" (UID: \"b7023f33-bcd5-455f-bb39-ef094539fe80\") " pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" Oct 30 00:08:08.733380 kubelet[3160]: I1030 00:08:08.733385 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgblb\" (UniqueName: \"kubernetes.io/projected/b7023f33-bcd5-455f-bb39-ef094539fe80-kube-api-access-fgblb\") pod \"calico-apiserver-d4dc65c88-44scr\" (UID: \"b7023f33-bcd5-455f-bb39-ef094539fe80\") " pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" Oct 30 00:08:08.851441 containerd[1697]: time="2025-10-30T00:08:08.824976739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b4884fc94-zxvrg,Uid:fc436dc2-0f71-481b-9a03-aea4931c7123,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:08.851525 kubelet[3160]: E1030 00:08:08.834214 3160 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: object "calico-apiserver"/"calico-apiserver-certs" not registered Oct 30 00:08:08.851525 kubelet[3160]: E1030 00:08:08.834299 3160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7023f33-bcd5-455f-bb39-ef094539fe80-calico-apiserver-certs podName:b7023f33-bcd5-455f-bb39-ef094539fe80 nodeName:}" failed. No retries permitted until 2025-10-30 00:08:09.334264311 +0000 UTC m=+62.486198549 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/b7023f33-bcd5-455f-bb39-ef094539fe80-calico-apiserver-certs") pod "calico-apiserver-d4dc65c88-44scr" (UID: "b7023f33-bcd5-455f-bb39-ef094539fe80") : object "calico-apiserver"/"calico-apiserver-certs" not registered Oct 30 00:08:08.851525 kubelet[3160]: E1030 00:08:08.838486 3160 projected.go:289] Couldn't get configMap calico-apiserver/kube-root-ca.crt: object "calico-apiserver"/"kube-root-ca.crt" not registered Oct 30 00:08:08.851525 kubelet[3160]: E1030 00:08:08.838501 3160 projected.go:194] Error preparing data for projected volume kube-api-access-fgblb for pod calico-apiserver/calico-apiserver-d4dc65c88-44scr: object "calico-apiserver"/"kube-root-ca.crt" not registered Oct 30 00:08:08.851525 kubelet[3160]: E1030 00:08:08.838552 3160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7023f33-bcd5-455f-bb39-ef094539fe80-kube-api-access-fgblb podName:b7023f33-bcd5-455f-bb39-ef094539fe80 nodeName:}" failed. No retries permitted until 2025-10-30 00:08:09.33852781 +0000 UTC m=+62.490462047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fgblb" (UniqueName: "kubernetes.io/projected/b7023f33-bcd5-455f-bb39-ef094539fe80-kube-api-access-fgblb") pod "calico-apiserver-d4dc65c88-44scr" (UID: "b7023f33-bcd5-455f-bb39-ef094539fe80") : object "calico-apiserver"/"kube-root-ca.crt" not registered Oct 30 00:08:08.864052 systemd[1]: Created slice kubepods-besteffort-podb7023f33_bcd5_455f_bb39_ef094539fe80.slice - libcontainer container kubepods-besteffort-podb7023f33_bcd5_455f_bb39_ef094539fe80.slice. Oct 30 00:08:08.992030 kubelet[3160]: I1030 00:08:08.934759 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-key-pair\") pod \"goldmane-666569f655-dlvqm\" (UID: \"69b7796b-0241-4712-b4ee-3f03c5de49ac\") " pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:08.992030 kubelet[3160]: I1030 00:08:08.934778 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndm4g\" (UniqueName: \"kubernetes.io/projected/69b7796b-0241-4712-b4ee-3f03c5de49ac-kube-api-access-ndm4g\") pod \"goldmane-666569f655-dlvqm\" (UID: \"69b7796b-0241-4712-b4ee-3f03c5de49ac\") " pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:08.992030 kubelet[3160]: I1030 00:08:08.934790 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-config\") pod \"goldmane-666569f655-dlvqm\" (UID: \"69b7796b-0241-4712-b4ee-3f03c5de49ac\") " pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:08.992030 kubelet[3160]: I1030 00:08:08.934802 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-ca-bundle\") pod \"goldmane-666569f655-dlvqm\" (UID: \"69b7796b-0241-4712-b4ee-3f03c5de49ac\") " pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:09.035908 kubelet[3160]: E1030 00:08:09.035703 3160 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: object "calico-system"/"goldmane-ca-bundle" not registered Oct 30 00:08:09.035908 kubelet[3160]: E1030 
00:08:09.035750 3160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-ca-bundle podName:69b7796b-0241-4712-b4ee-3f03c5de49ac nodeName:}" failed. No retries permitted until 2025-10-30 00:08:09.535739443 +0000 UTC m=+62.687673678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-ca-bundle") pod "goldmane-666569f655-dlvqm" (UID: "69b7796b-0241-4712-b4ee-3f03c5de49ac") : object "calico-system"/"goldmane-ca-bundle" not registered Oct 30 00:08:09.035908 kubelet[3160]: E1030 00:08:09.035808 3160 configmap.go:193] Couldn't get configMap calico-system/goldmane: object "calico-system"/"goldmane" not registered Oct 30 00:08:09.035908 kubelet[3160]: E1030 00:08:09.035828 3160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-config podName:69b7796b-0241-4712-b4ee-3f03c5de49ac nodeName:}" failed. No retries permitted until 2025-10-30 00:08:09.535821561 +0000 UTC m=+62.687755791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/69b7796b-0241-4712-b4ee-3f03c5de49ac-config") pod "goldmane-666569f655-dlvqm" (UID: "69b7796b-0241-4712-b4ee-3f03c5de49ac") : object "calico-system"/"goldmane" not registered Oct 30 00:08:09.035908 kubelet[3160]: E1030 00:08:09.035702 3160 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: object "calico-system"/"goldmane-key-pair" not registered Oct 30 00:08:09.036316 kubelet[3160]: E1030 00:08:09.035849 3160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-key-pair podName:69b7796b-0241-4712-b4ee-3f03c5de49ac nodeName:}" failed. No retries permitted until 2025-10-30 00:08:09.535843195 +0000 UTC m=+62.687777429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/69b7796b-0241-4712-b4ee-3f03c5de49ac-goldmane-key-pair") pod "goldmane-666569f655-dlvqm" (UID: "69b7796b-0241-4712-b4ee-3f03c5de49ac") : object "calico-system"/"goldmane-key-pair" not registered Oct 30 00:08:09.065348 systemd[1]: Created slice kubepods-besteffort-pod69b7796b_0241_4712_b4ee_3f03c5de49ac.slice - libcontainer container kubepods-besteffort-pod69b7796b_0241_4712_b4ee_3f03c5de49ac.slice. Oct 30 00:08:09.070771 systemd[1]: Created slice kubepods-burstable-pode474ec4d_2a3f_4853_a5fa_0c20bb4f628a.slice - libcontainer container kubepods-burstable-pode474ec4d_2a3f_4853_a5fa_0c20bb4f628a.slice. 
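
Note: the MountVolume.SetUp failures above are transient ordering issues: the goldmane Secret and ConfigMaps referenced by the freshly created pod are not yet registered with the kubelet's secret/configmap manager, so each SetUp attempt fails and is rescheduled only after a growing delay, starting at the logged 500ms. A generic sketch of that retry pattern, under the assumption of simple doubling backoff (not the kubelet's actual nestedpendingoperations implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff illustrates the pattern visible in the log: a failed
    // operation is not retried before an exponentially growing delay.
    func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i+1, err, delay)
            time.Sleep(delay)
            delay *= 2
            if delay > max {
                delay = max
            }
        }
        return err
    }

    func main() {
        mountVolume := func() error {
            // Stand-in for MountVolume.SetUp while the Secret/ConfigMap is
            // not yet registered with the node.
            return errors.New(`object "calico-system"/"goldmane-key-pair" not registered`)
        }
        _ = retryWithBackoff(mountVolume, 500*time.Millisecond, 2*time.Minute, 3)
    }
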
Oct 30 00:08:09.136859 kubelet[3160]: I1030 00:08:09.136795 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e474ec4d-2a3f-4853-a5fa-0c20bb4f628a-config-volume\") pod \"coredns-674b8bbfcf-sr5np\" (UID: \"e474ec4d-2a3f-4853-a5fa-0c20bb4f628a\") " pod="kube-system/coredns-674b8bbfcf-sr5np" Oct 30 00:08:09.136859 kubelet[3160]: I1030 00:08:09.136857 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msr8w\" (UniqueName: \"kubernetes.io/projected/e474ec4d-2a3f-4853-a5fa-0c20bb4f628a-kube-api-access-msr8w\") pod \"coredns-674b8bbfcf-sr5np\" (UID: \"e474ec4d-2a3f-4853-a5fa-0c20bb4f628a\") " pod="kube-system/coredns-674b8bbfcf-sr5np" Oct 30 00:08:09.165726 systemd[1]: Created slice kubepods-besteffort-podedd2e4ea_71b9_4fa2_9387_fc17f5b6fe6d.slice - libcontainer container kubepods-besteffort-podedd2e4ea_71b9_4fa2_9387_fc17f5b6fe6d.slice. Oct 30 00:08:09.174995 systemd[1]: Created slice kubepods-burstable-pod125a213b_c54a_4db2_bdd4_c80c7c20641e.slice - libcontainer container kubepods-burstable-pod125a213b_c54a_4db2_bdd4_c80c7c20641e.slice. Oct 30 00:08:09.184573 systemd[1]: Created slice kubepods-besteffort-pod4462c777_3a7c_4ea5_8cfd_9b0d8e8807cf.slice - libcontainer container kubepods-besteffort-pod4462c777_3a7c_4ea5_8cfd_9b0d8e8807cf.slice. Oct 30 00:08:09.194237 systemd[1]: Created slice kubepods-besteffort-poda3b96faf_6434_4c32_bdb2_a83d279f75ef.slice - libcontainer container kubepods-besteffort-poda3b96faf_6434_4c32_bdb2_a83d279f75ef.slice. Oct 30 00:08:09.198010 containerd[1697]: time="2025-10-30T00:08:09.197981218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgwld,Uid:a3b96faf-6434-4c32-bdb2-a83d279f75ef,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:09.217217 containerd[1697]: time="2025-10-30T00:08:09.217186405Z" level=error msg="Failed to destroy network for sandbox \"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.220620 containerd[1697]: time="2025-10-30T00:08:09.220495246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b4884fc94-zxvrg,Uid:fc436dc2-0f71-481b-9a03-aea4931c7123,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.220869 kubelet[3160]: E1030 00:08:09.220654 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.220869 kubelet[3160]: E1030 00:08:09.220702 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b4884fc94-zxvrg" Oct 30 00:08:09.220869 kubelet[3160]: E1030 00:08:09.220718 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b4884fc94-zxvrg" Oct 30 00:08:09.220949 kubelet[3160]: E1030 00:08:09.220755 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b4884fc94-zxvrg_calico-system(fc436dc2-0f71-481b-9a03-aea4931c7123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b4884fc94-zxvrg_calico-system(fc436dc2-0f71-481b-9a03-aea4931c7123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f66246c237e82d9e6c60d38efa753ab4071d0e61dcc54b8d1e89d4f550827e05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b4884fc94-zxvrg" podUID="fc436dc2-0f71-481b-9a03-aea4931c7123" Oct 30 00:08:09.237469 kubelet[3160]: I1030 00:08:09.237161 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf-calico-apiserver-certs\") pod \"calico-apiserver-d4dc65c88-vhhsm\" (UID: \"4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf\") " pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" Oct 30 00:08:09.240640 systemd[1]: run-netns-cni\x2d1bb46aff\x2da675\x2dcbf1\x2dfba3\x2d7c6e3aebfe9d.mount: Deactivated successfully. 
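
Note: every RunPodSandbox failure in this stretch carries the same root cause: the Calico CNI plugin needs the nodename file that the calico/node container writes under /var/lib/calico/ once it is running, and that container is not up yet, so network setup fails for each pending pod (whisker, csi-node-driver, coredns, and the rest). A small sketch of the precondition the error message describes (illustrative only, not Calico's code); once calico/node starts and writes the file, these sandbox retries would be expected to succeed without intervention.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readCalicoNodename illustrates the precondition behind the repeated
    // sandbox failures: the CNI plugin reads /var/lib/calico/nodename, which
    // only exists after the calico/node container has started.
    func readCalicoNodename(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) {
                return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
            }
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readCalicoNodename("/var/lib/calico/nodename")
        if err != nil {
            fmt.Println("setup network for sandbox failed:", err)
            return
        }
        fmt.Println("calico nodename:", name)
    }
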
Oct 30 00:08:09.242978 kubelet[3160]: I1030 00:08:09.242518 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d-tigera-ca-bundle\") pod \"calico-kube-controllers-ffb6d876d-8qgfk\" (UID: \"edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d\") " pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" Oct 30 00:08:09.242978 kubelet[3160]: I1030 00:08:09.242546 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/125a213b-c54a-4db2-bdd4-c80c7c20641e-config-volume\") pod \"coredns-674b8bbfcf-ffnpb\" (UID: \"125a213b-c54a-4db2-bdd4-c80c7c20641e\") " pod="kube-system/coredns-674b8bbfcf-ffnpb" Oct 30 00:08:09.242978 kubelet[3160]: I1030 00:08:09.242567 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5pb\" (UniqueName: \"kubernetes.io/projected/125a213b-c54a-4db2-bdd4-c80c7c20641e-kube-api-access-cx5pb\") pod \"coredns-674b8bbfcf-ffnpb\" (UID: \"125a213b-c54a-4db2-bdd4-c80c7c20641e\") " pod="kube-system/coredns-674b8bbfcf-ffnpb" Oct 30 00:08:09.242978 kubelet[3160]: I1030 00:08:09.242583 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz2q\" (UniqueName: \"kubernetes.io/projected/edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d-kube-api-access-6wz2q\") pod \"calico-kube-controllers-ffb6d876d-8qgfk\" (UID: \"edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d\") " pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" Oct 30 00:08:09.242978 kubelet[3160]: I1030 00:08:09.242618 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbqx\" (UniqueName: \"kubernetes.io/projected/4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf-kube-api-access-bzbqx\") pod \"calico-apiserver-d4dc65c88-vhhsm\" (UID: \"4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf\") " pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" Oct 30 00:08:09.246012 containerd[1697]: time="2025-10-30T00:08:09.245981076Z" level=error msg="Failed to destroy network for sandbox \"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.249090 systemd[1]: run-netns-cni\x2d7da8e1ce\x2d26d9\x2d2b4b\x2dc025\x2d130956efff42.mount: Deactivated successfully. 
Oct 30 00:08:09.252059 containerd[1697]: time="2025-10-30T00:08:09.252021065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgwld,Uid:a3b96faf-6434-4c32-bdb2-a83d279f75ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.252781 kubelet[3160]: E1030 00:08:09.252648 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.253222 kubelet[3160]: E1030 00:08:09.253205 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgwld" Oct 30 00:08:09.253300 kubelet[3160]: E1030 00:08:09.253288 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgwld" Oct 30 00:08:09.253382 kubelet[3160]: E1030 00:08:09.253364 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57f553c9ecad37c21065dafff56982a61dca5832558be515a7cfcc92e2bee225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:09.373350 containerd[1697]: time="2025-10-30T00:08:09.373319767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr5np,Uid:e474ec4d-2a3f-4853-a5fa-0c20bb4f628a,Namespace:kube-system,Attempt:0,}" Oct 30 00:08:09.408611 containerd[1697]: time="2025-10-30T00:08:09.408505165Z" level=error msg="Failed to destroy network for sandbox \"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.412359 containerd[1697]: time="2025-10-30T00:08:09.412333136Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-sr5np,Uid:e474ec4d-2a3f-4853-a5fa-0c20bb4f628a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.412770 kubelet[3160]: E1030 00:08:09.412461 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.412770 kubelet[3160]: E1030 00:08:09.412499 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sr5np" Oct 30 00:08:09.412770 kubelet[3160]: E1030 00:08:09.412515 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sr5np" Oct 30 00:08:09.412856 kubelet[3160]: E1030 00:08:09.412548 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sr5np_kube-system(e474ec4d-2a3f-4853-a5fa-0c20bb4f628a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sr5np_kube-system(e474ec4d-2a3f-4853-a5fa-0c20bb4f628a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"435f35c2d017757517bc23eb74bf83687dc349bb6391e0d5972b0e1eeac54835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sr5np" podUID="e474ec4d-2a3f-4853-a5fa-0c20bb4f628a" Oct 30 00:08:09.470500 containerd[1697]: time="2025-10-30T00:08:09.470466624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb6d876d-8qgfk,Uid:edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:09.485584 containerd[1697]: time="2025-10-30T00:08:09.485562002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffnpb,Uid:125a213b-c54a-4db2-bdd4-c80c7c20641e,Namespace:kube-system,Attempt:0,}" Oct 30 00:08:09.496348 containerd[1697]: time="2025-10-30T00:08:09.496317363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-vhhsm,Uid:4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:08:09.543242 containerd[1697]: time="2025-10-30T00:08:09.543210252Z" level=error msg="Failed to destroy network 
for sandbox \"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.547041 containerd[1697]: time="2025-10-30T00:08:09.546589924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb6d876d-8qgfk,Uid:edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.547253 kubelet[3160]: E1030 00:08:09.546737 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.547253 kubelet[3160]: E1030 00:08:09.546769 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" Oct 30 00:08:09.547253 kubelet[3160]: E1030 00:08:09.546797 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" Oct 30 00:08:09.547402 kubelet[3160]: E1030 00:08:09.546842 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c2176a1d9cdf0e08a57e847383628bff0d4bc1f101ba36e6add3ec3fe1d9e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:09.547751 containerd[1697]: time="2025-10-30T00:08:09.547606081Z" level=error msg="Failed to destroy network for sandbox \"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 30 00:08:09.552818 containerd[1697]: time="2025-10-30T00:08:09.552525524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffnpb,Uid:125a213b-c54a-4db2-bdd4-c80c7c20641e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.552912 kubelet[3160]: E1030 00:08:09.552719 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.552912 kubelet[3160]: E1030 00:08:09.552752 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffnpb" Oct 30 00:08:09.552912 kubelet[3160]: E1030 00:08:09.552784 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffnpb" Oct 30 00:08:09.552999 kubelet[3160]: E1030 00:08:09.552827 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ffnpb_kube-system(125a213b-c54a-4db2-bdd4-c80c7c20641e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ffnpb_kube-system(125a213b-c54a-4db2-bdd4-c80c7c20641e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90021a6353e97a284030c729d15ebb60c5ca076361a3563cd82ddf05de9906fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ffnpb" podUID="125a213b-c54a-4db2-bdd4-c80c7c20641e" Oct 30 00:08:09.562206 containerd[1697]: time="2025-10-30T00:08:09.562179783Z" level=error msg="Failed to destroy network for sandbox \"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.565638 containerd[1697]: time="2025-10-30T00:08:09.565615105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-vhhsm,Uid:4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.565767 kubelet[3160]: E1030 00:08:09.565745 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.565818 kubelet[3160]: E1030 00:08:09.565780 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" Oct 30 00:08:09.565818 kubelet[3160]: E1030 00:08:09.565798 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" Oct 30 00:08:09.565885 kubelet[3160]: E1030 00:08:09.565843 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62446b503806395c077a830386524424bf7b682afd8f999b4718671804ca7c8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:08:09.593500 containerd[1697]: time="2025-10-30T00:08:09.593480345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-44scr,Uid:b7023f33-bcd5-455f-bb39-ef094539fe80,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:08:09.635097 containerd[1697]: time="2025-10-30T00:08:09.635067527Z" level=error msg="Failed to destroy network for sandbox \"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.637891 containerd[1697]: time="2025-10-30T00:08:09.637865639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-44scr,Uid:b7023f33-bcd5-455f-bb39-ef094539fe80,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.638088 kubelet[3160]: E1030 00:08:09.637975 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.638130 kubelet[3160]: E1030 00:08:09.638092 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" Oct 30 00:08:09.638130 kubelet[3160]: E1030 00:08:09.638109 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" Oct 30 00:08:09.638173 kubelet[3160]: E1030 00:08:09.638161 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68ce15d0d65f84b4eb80d14aecca4175b6709051d94b1e7a047417649a850aa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:09.669404 containerd[1697]: time="2025-10-30T00:08:09.669201567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dlvqm,Uid:69b7796b-0241-4712-b4ee-3f03c5de49ac,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:09.707026 containerd[1697]: time="2025-10-30T00:08:09.706999913Z" level=error msg="Failed to destroy network for sandbox \"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.713331 containerd[1697]: time="2025-10-30T00:08:09.713297502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dlvqm,Uid:69b7796b-0241-4712-b4ee-3f03c5de49ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.713687 kubelet[3160]: E1030 00:08:09.713441 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:08:09.713687 kubelet[3160]: E1030 00:08:09.713473 3160 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:09.713687 kubelet[3160]: E1030 00:08:09.713501 3160 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dlvqm" Oct 30 00:08:09.714430 kubelet[3160]: E1030 00:08:09.713535 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"074ba2c6b1f132e4cf217fe1413d77ea5f1615e4617ba480dcde9889bb932c51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:10.067209 containerd[1697]: time="2025-10-30T00:08:10.066887958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:08:14.506062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157973827.mount: Deactivated successfully. 
Oct 30 00:08:14.542195 containerd[1697]: time="2025-10-30T00:08:14.542155619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:14.544463 containerd[1697]: time="2025-10-30T00:08:14.544439906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:08:14.547037 containerd[1697]: time="2025-10-30T00:08:14.547001193Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:14.552452 containerd[1697]: time="2025-10-30T00:08:14.552079921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:08:14.552452 containerd[1697]: time="2025-10-30T00:08:14.552370937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.485052551s" Oct 30 00:08:14.552452 containerd[1697]: time="2025-10-30T00:08:14.552392962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:08:14.569535 containerd[1697]: time="2025-10-30T00:08:14.569511628Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:08:14.596983 containerd[1697]: time="2025-10-30T00:08:14.595552964Z" level=info msg="Container 1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:14.613076 containerd[1697]: time="2025-10-30T00:08:14.613048729Z" level=info msg="CreateContainer within sandbox \"d996fefceef68c4d64d70ab0cb73d992cdf4bf356d40339afd9f495a63a2021a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\"" Oct 30 00:08:14.613459 containerd[1697]: time="2025-10-30T00:08:14.613438582Z" level=info msg="StartContainer for \"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\"" Oct 30 00:08:14.614977 containerd[1697]: time="2025-10-30T00:08:14.614948637Z" level=info msg="connecting to shim 1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68" address="unix:///run/containerd/s/79dfd393564959a1157bcbcd8edd936debb0692a2c020569b615a803e686470b" protocol=ttrpc version=3 Oct 30 00:08:14.633431 systemd[1]: Started cri-containerd-1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68.scope - libcontainer container 1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68. Oct 30 00:08:14.675440 containerd[1697]: time="2025-10-30T00:08:14.675421991Z" level=info msg="StartContainer for \"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" returns successfully" Oct 30 00:08:15.020371 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:08:15.020438 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 30 00:08:15.111301 kubelet[3160]: I1030 00:08:15.110880 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vgdj5" podStartSLOduration=1.134508221 podStartE2EDuration="43.110864553s" podCreationTimestamp="2025-10-30 00:07:32 +0000 UTC" firstStartedPulling="2025-10-30 00:07:32.576594402 +0000 UTC m=+25.728528634" lastFinishedPulling="2025-10-30 00:08:14.552950734 +0000 UTC m=+67.704884966" observedRunningTime="2025-10-30 00:08:15.109115795 +0000 UTC m=+68.261050030" watchObservedRunningTime="2025-10-30 00:08:15.110864553 +0000 UTC m=+68.262798791" Oct 30 00:08:15.173246 kubelet[3160]: I1030 00:08:15.173220 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2kh\" (UniqueName: \"kubernetes.io/projected/fc436dc2-0f71-481b-9a03-aea4931c7123-kube-api-access-zk2kh\") pod \"fc436dc2-0f71-481b-9a03-aea4931c7123\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " Oct 30 00:08:15.173335 kubelet[3160]: I1030 00:08:15.173262 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-ca-bundle\") pod \"fc436dc2-0f71-481b-9a03-aea4931c7123\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " Oct 30 00:08:15.173335 kubelet[3160]: I1030 00:08:15.173298 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-backend-key-pair\") pod \"fc436dc2-0f71-481b-9a03-aea4931c7123\" (UID: \"fc436dc2-0f71-481b-9a03-aea4931c7123\") " Oct 30 00:08:15.174973 kubelet[3160]: I1030 00:08:15.174934 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fc436dc2-0f71-481b-9a03-aea4931c7123" (UID: "fc436dc2-0f71-481b-9a03-aea4931c7123"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:08:15.178048 kubelet[3160]: I1030 00:08:15.178009 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc436dc2-0f71-481b-9a03-aea4931c7123-kube-api-access-zk2kh" (OuterVolumeSpecName: "kube-api-access-zk2kh") pod "fc436dc2-0f71-481b-9a03-aea4931c7123" (UID: "fc436dc2-0f71-481b-9a03-aea4931c7123"). InnerVolumeSpecName "kube-api-access-zk2kh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:08:15.178248 kubelet[3160]: I1030 00:08:15.178222 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fc436dc2-0f71-481b-9a03-aea4931c7123" (UID: "fc436dc2-0f71-481b-9a03-aea4931c7123"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:08:15.186167 containerd[1697]: time="2025-10-30T00:08:15.186062590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"b0fe24387905a157cf1f71d66383c65ea541716ce8cc53272fa089be7196dbaa\" pid:4233 exit_status:1 exited_at:{seconds:1761782895 nanos:185645548}" Oct 30 00:08:15.273875 kubelet[3160]: I1030 00:08:15.273645 3160 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-ca-bundle\") on node \"ci-4459.1.0-n-666d628454\" DevicePath \"\"" Oct 30 00:08:15.273875 kubelet[3160]: I1030 00:08:15.273669 3160 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc436dc2-0f71-481b-9a03-aea4931c7123-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-666d628454\" DevicePath \"\"" Oct 30 00:08:15.273875 kubelet[3160]: I1030 00:08:15.273680 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zk2kh\" (UniqueName: \"kubernetes.io/projected/fc436dc2-0f71-481b-9a03-aea4931c7123-kube-api-access-zk2kh\") on node \"ci-4459.1.0-n-666d628454\" DevicePath \"\"" Oct 30 00:08:15.506009 systemd[1]: var-lib-kubelet-pods-fc436dc2\x2d0f71\x2d481b\x2d9a03\x2daea4931c7123-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzk2kh.mount: Deactivated successfully. Oct 30 00:08:15.506082 systemd[1]: var-lib-kubelet-pods-fc436dc2\x2d0f71\x2d481b\x2d9a03\x2daea4931c7123-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:08:16.085462 systemd[1]: Removed slice kubepods-besteffort-podfc436dc2_0f71_481b_9a03_aea4931c7123.slice - libcontainer container kubepods-besteffort-podfc436dc2_0f71_481b_9a03_aea4931c7123.slice. Oct 30 00:08:16.168029 systemd[1]: Created slice kubepods-besteffort-podb8c2d711_e3d2_49d2_9ce4_f8ddd389b734.slice - libcontainer container kubepods-besteffort-podb8c2d711_e3d2_49d2_9ce4_f8ddd389b734.slice. 
Oct 30 00:08:16.175894 containerd[1697]: time="2025-10-30T00:08:16.175864101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"35fd81da53dffe4f207d447b86dc6f38733a99a968d80c6fba7aadc63662eba7\" pid:4279 exit_status:1 exited_at:{seconds:1761782896 nanos:175493253}" Oct 30 00:08:16.279864 kubelet[3160]: I1030 00:08:16.279829 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8c2d711-e3d2-49d2-9ce4-f8ddd389b734-whisker-backend-key-pair\") pod \"whisker-64fcb4dd76-8t4c6\" (UID: \"b8c2d711-e3d2-49d2-9ce4-f8ddd389b734\") " pod="calico-system/whisker-64fcb4dd76-8t4c6" Oct 30 00:08:16.279864 kubelet[3160]: I1030 00:08:16.279862 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8c2d711-e3d2-49d2-9ce4-f8ddd389b734-whisker-ca-bundle\") pod \"whisker-64fcb4dd76-8t4c6\" (UID: \"b8c2d711-e3d2-49d2-9ce4-f8ddd389b734\") " pod="calico-system/whisker-64fcb4dd76-8t4c6" Oct 30 00:08:16.280125 kubelet[3160]: I1030 00:08:16.279877 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcpf5\" (UniqueName: \"kubernetes.io/projected/b8c2d711-e3d2-49d2-9ce4-f8ddd389b734-kube-api-access-vcpf5\") pod \"whisker-64fcb4dd76-8t4c6\" (UID: \"b8c2d711-e3d2-49d2-9ce4-f8ddd389b734\") " pod="calico-system/whisker-64fcb4dd76-8t4c6" Oct 30 00:08:16.472036 containerd[1697]: time="2025-10-30T00:08:16.471969722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fcb4dd76-8t4c6,Uid:b8c2d711-e3d2-49d2-9ce4-f8ddd389b734,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:16.621264 systemd-networkd[1334]: cali4a39e2de188: Link UP Oct 30 00:08:16.621966 systemd-networkd[1334]: cali4a39e2de188: Gained carrier Oct 30 00:08:16.638972 containerd[1697]: 2025-10-30 00:08:16.514 [INFO][4385] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:08:16.638972 containerd[1697]: 2025-10-30 00:08:16.523 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0 whisker-64fcb4dd76- calico-system b8c2d711-e3d2-49d2-9ce4-f8ddd389b734 959 0 2025-10-30 00:08:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64fcb4dd76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 whisker-64fcb4dd76-8t4c6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a39e2de188 [] [] }} ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-" Oct 30 00:08:16.638972 containerd[1697]: 2025-10-30 00:08:16.523 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.638972 containerd[1697]: 2025-10-30 00:08:16.560 [INFO][4407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" HandleID="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Workload="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.560 [INFO][4407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" HandleID="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Workload="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-666d628454", "pod":"whisker-64fcb4dd76-8t4c6", "timestamp":"2025-10-30 00:08:16.560107078 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.561 [INFO][4407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.561 [INFO][4407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.561 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.566 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.569 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.573 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.577 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639159 containerd[1697]: 2025-10-30 00:08:16.580 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.580 [INFO][4407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.582 [INFO][4407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5 Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.587 [INFO][4407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.595 [INFO][4407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.65/26] block=192.168.14.64/26 handle="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" 
host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.595 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.65/26] handle="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.595 [INFO][4407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:08:16.639368 containerd[1697]: 2025-10-30 00:08:16.595 [INFO][4407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.65/26] IPv6=[] ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" HandleID="k8s-pod-network.d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Workload="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.639491 containerd[1697]: 2025-10-30 00:08:16.599 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0", GenerateName:"whisker-64fcb4dd76-", Namespace:"calico-system", SelfLink:"", UID:"b8c2d711-e3d2-49d2-9ce4-f8ddd389b734", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64fcb4dd76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"whisker-64fcb4dd76-8t4c6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a39e2de188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:16.639491 containerd[1697]: 2025-10-30 00:08:16.599 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.65/32] ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.639568 containerd[1697]: 2025-10-30 00:08:16.599 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a39e2de188 ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.639568 containerd[1697]: 2025-10-30 00:08:16.622 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.639608 containerd[1697]: 2025-10-30 00:08:16.622 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0", GenerateName:"whisker-64fcb4dd76-", Namespace:"calico-system", SelfLink:"", UID:"b8c2d711-e3d2-49d2-9ce4-f8ddd389b734", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64fcb4dd76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5", Pod:"whisker-64fcb4dd76-8t4c6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a39e2de188", MAC:"b2:3e:30:55:7f:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:16.639659 containerd[1697]: 2025-10-30 00:08:16.636 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" Namespace="calico-system" Pod="whisker-64fcb4dd76-8t4c6" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-whisker--64fcb4dd76--8t4c6-eth0" Oct 30 00:08:16.683197 containerd[1697]: time="2025-10-30T00:08:16.683162854Z" level=info msg="connecting to shim d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5" address="unix:///run/containerd/s/b349030102f268a38d921638d6bcb1946f05182d29a4ae0a30d7e4f33fdbfbeb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:16.705405 systemd[1]: Started cri-containerd-d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5.scope - libcontainer container d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5. 
Oct 30 00:08:16.761393 containerd[1697]: time="2025-10-30T00:08:16.761366282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fcb4dd76-8t4c6,Uid:b8c2d711-e3d2-49d2-9ce4-f8ddd389b734,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3458e2ecce7692c131f2fee66466e73371f065b890c2d2ba33f4b92e6117ce5\"" Oct 30 00:08:16.762762 containerd[1697]: time="2025-10-30T00:08:16.762742504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:08:16.918034 systemd-networkd[1334]: vxlan.calico: Link UP Oct 30 00:08:16.918168 systemd-networkd[1334]: vxlan.calico: Gained carrier Oct 30 00:08:16.931458 kubelet[3160]: I1030 00:08:16.931429 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc436dc2-0f71-481b-9a03-aea4931c7123" path="/var/lib/kubelet/pods/fc436dc2-0f71-481b-9a03-aea4931c7123/volumes" Oct 30 00:08:17.012565 containerd[1697]: time="2025-10-30T00:08:17.012507051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:17.015361 containerd[1697]: time="2025-10-30T00:08:17.015332899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:08:17.015419 containerd[1697]: time="2025-10-30T00:08:17.015394140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:08:17.015538 kubelet[3160]: E1030 00:08:17.015493 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:17.015591 kubelet[3160]: E1030 00:08:17.015551 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:17.016225 kubelet[3160]: E1030 00:08:17.016185 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e70db6a829a44c9bf10fd58f8144dc1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:17.018347 containerd[1697]: time="2025-10-30T00:08:17.018137464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:08:17.322217 containerd[1697]: time="2025-10-30T00:08:17.322076062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:17.325002 containerd[1697]: time="2025-10-30T00:08:17.324973130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:08:17.325380 containerd[1697]: time="2025-10-30T00:08:17.325027384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:17.325418 kubelet[3160]: E1030 00:08:17.325153 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:17.325418 kubelet[3160]: E1030 00:08:17.325191 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:17.325616 kubelet[3160]: E1030 00:08:17.325310 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:17.326600 kubelet[3160]: E1030 00:08:17.326570 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:08:18.085999 kubelet[3160]: E1030 00:08:18.085931 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:08:18.481407 systemd-networkd[1334]: vxlan.calico: Gained IPv6LL Oct 30 00:08:18.609738 systemd-networkd[1334]: cali4a39e2de188: Gained IPv6LL Oct 30 00:08:21.927512 containerd[1697]: time="2025-10-30T00:08:21.927477221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb6d876d-8qgfk,Uid:edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:22.026492 systemd-networkd[1334]: cali5a751677fe8: Link UP Oct 30 00:08:22.026605 systemd-networkd[1334]: cali5a751677fe8: Gained carrier Oct 30 00:08:22.041834 containerd[1697]: 2025-10-30 00:08:21.964 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0 calico-kube-controllers-ffb6d876d- calico-system edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d 892 0 2025-10-30 00:07:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ffb6d876d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 calico-kube-controllers-ffb6d876d-8qgfk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5a751677fe8 [] [] }} ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-" Oct 30 00:08:22.041834 containerd[1697]: 2025-10-30 00:08:21.964 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.041834 containerd[1697]: 2025-10-30 00:08:21.985 [INFO][4573] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" HandleID="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Workload="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:21.985 [INFO][4573] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" HandleID="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" 
Workload="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-666d628454", "pod":"calico-kube-controllers-ffb6d876d-8qgfk", "timestamp":"2025-10-30 00:08:21.985680425 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:21.985 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:21.985 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:21.985 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:21.992 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:22.000 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:22.002 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:22.004 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042028 containerd[1697]: 2025-10-30 00:08:22.005 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.005 [INFO][4573] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.006 [INFO][4573] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358 Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.013 [INFO][4573] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.022 [INFO][4573] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.66/26] block=192.168.14.64/26 handle="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.023 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.66/26] handle="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.023 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:08:22.042251 containerd[1697]: 2025-10-30 00:08:22.023 [INFO][4573] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.66/26] IPv6=[] ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" HandleID="k8s-pod-network.106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Workload="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.042494 containerd[1697]: 2025-10-30 00:08:22.024 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0", GenerateName:"calico-kube-controllers-ffb6d876d-", Namespace:"calico-system", SelfLink:"", UID:"edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb6d876d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"calico-kube-controllers-ffb6d876d-8qgfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5a751677fe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:22.042565 containerd[1697]: 2025-10-30 00:08:22.024 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.66/32] ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.042565 containerd[1697]: 2025-10-30 00:08:22.024 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a751677fe8 ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.042565 containerd[1697]: 2025-10-30 00:08:22.026 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 
00:08:22.042831 containerd[1697]: 2025-10-30 00:08:22.027 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0", GenerateName:"calico-kube-controllers-ffb6d876d-", Namespace:"calico-system", SelfLink:"", UID:"edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb6d876d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358", Pod:"calico-kube-controllers-ffb6d876d-8qgfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5a751677fe8", MAC:"7e:46:c2:a0:25:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:22.042902 containerd[1697]: 2025-10-30 00:08:22.038 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" Namespace="calico-system" Pod="calico-kube-controllers-ffb6d876d-8qgfk" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--kube--controllers--ffb6d876d--8qgfk-eth0" Oct 30 00:08:22.106580 containerd[1697]: time="2025-10-30T00:08:22.106548508Z" level=info msg="connecting to shim 106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358" address="unix:///run/containerd/s/fc6c346b786f58c867e8f9a19599210ccd249c6f88dd80dabb7746917b3b1420" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:22.127433 systemd[1]: Started cri-containerd-106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358.scope - libcontainer container 106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358. 
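[Editor's note] "connecting to shim ... address=unix:///run/containerd/s/... protocol=ttrpc version=3" means containerd talks to the sandbox's shim over a unix socket. A hedged Go sketch of a plain reachability check against that socket (the path is copied from the log entry above; the real client speaks ttrpc v3 over this connection, the socket only exists while the sandbox is alive, and dialing it requires root on the node):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Shim socket path as logged by containerd for this sandbox (illustrative).
        const sock = "/run/containerd/s/fc6c346b786f58c867e8f9a19599210ccd249c6f88dd80dabb7746917b3b1420"

        // Plain unix-socket dial; this only proves the shim is listening.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket reachable")
    }
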
Oct 30 00:08:22.161788 containerd[1697]: time="2025-10-30T00:08:22.161766131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb6d876d-8qgfk,Uid:edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"106683da5fa0d457eb8e432bea2d6cbf129a50bfeacbc1684e1ee453bae7e358\"" Oct 30 00:08:22.163295 containerd[1697]: time="2025-10-30T00:08:22.163228550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:08:22.424304 containerd[1697]: time="2025-10-30T00:08:22.424253492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:22.428189 containerd[1697]: time="2025-10-30T00:08:22.428150469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:08:22.428332 containerd[1697]: time="2025-10-30T00:08:22.428209967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:22.428367 kubelet[3160]: E1030 00:08:22.428339 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:22.428624 kubelet[3160]: E1030 00:08:22.428377 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:22.428624 kubelet[3160]: E1030 00:08:22.428510 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wz2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:22.430350 kubelet[3160]: E1030 00:08:22.430321 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:22.928635 containerd[1697]: time="2025-10-30T00:08:22.928065565Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-44scr,Uid:b7023f33-bcd5-455f-bb39-ef094539fe80,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:08:23.017074 systemd-networkd[1334]: cali8d8a9a264f7: Link UP Oct 30 00:08:23.017465 systemd-networkd[1334]: cali8d8a9a264f7: Gained carrier Oct 30 00:08:23.031385 containerd[1697]: 2025-10-30 00:08:22.964 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0 calico-apiserver-d4dc65c88- calico-apiserver b7023f33-bcd5-455f-bb39-ef094539fe80 886 0 2025-10-30 00:07:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4dc65c88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 calico-apiserver-d4dc65c88-44scr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8d8a9a264f7 [] [] }} ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-" Oct 30 00:08:23.031385 containerd[1697]: 2025-10-30 00:08:22.964 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.031385 containerd[1697]: 2025-10-30 00:08:22.986 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" HandleID="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.986 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" HandleID="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-666d628454", "pod":"calico-apiserver-d4dc65c88-44scr", "timestamp":"2025-10-30 00:08:22.986063367 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.986 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.986 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
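[Editor's note] The ErrImagePull entries above all stem from one reference that cannot be resolved: ghcr.io/flatcar/calico/kube-controllers:v3.30.4. A small sketch of splitting such a reference into registry, repository and tag with the standard library only (real resolvers such as containerd's implement the full reference grammar with digests and default registries; this handles only the simple host/path:tag shape seen in the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks "host/path:tag" into its parts; digests ("@sha256:...")
    // and default-registry shorthand are deliberately not handled here.
    func splitRef(ref string) (registry, repository, tag string) {
        name := ref
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            name, tag = ref[:i], ref[i+1:]
        }
        registry, repository, _ = strings.Cut(name, "/")
        return registry, repository, tag
    }

    func main() {
        reg, repo, tag := splitRef("ghcr.io/flatcar/calico/kube-controllers:v3.30.4")
        fmt.Println(reg)  // ghcr.io
        fmt.Println(repo) // flatcar/calico/kube-controllers
        fmt.Println(tag)  // v3.30.4
    }
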
Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.986 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.989 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.993 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.996 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:22.998 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031757 containerd[1697]: 2025-10-30 00:08:23.000 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.000 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.001 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48 Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.005 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.012 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.67/26] block=192.168.14.64/26 handle="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.012 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.67/26] handle="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.012 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:08:23.031986 containerd[1697]: 2025-10-30 00:08:23.012 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.67/26] IPv6=[] ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" HandleID="k8s-pod-network.54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.032135 containerd[1697]: 2025-10-30 00:08:23.014 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0", GenerateName:"calico-apiserver-d4dc65c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7023f33-bcd5-455f-bb39-ef094539fe80", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4dc65c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"calico-apiserver-d4dc65c88-44scr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d8a9a264f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:23.032200 containerd[1697]: 2025-10-30 00:08:23.014 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.67/32] ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.032200 containerd[1697]: 2025-10-30 00:08:23.014 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d8a9a264f7 ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.032200 containerd[1697]: 2025-10-30 00:08:23.017 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.032255 containerd[1697]: 2025-10-30 00:08:23.017 [INFO][4643] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0", GenerateName:"calico-apiserver-d4dc65c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7023f33-bcd5-455f-bb39-ef094539fe80", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4dc65c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48", Pod:"calico-apiserver-d4dc65c88-44scr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d8a9a264f7", MAC:"d6:37:87:db:60:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:23.033068 containerd[1697]: 2025-10-30 00:08:23.028 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-44scr" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--44scr-eth0" Oct 30 00:08:23.093528 kubelet[3160]: E1030 00:08:23.093495 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:23.121775 containerd[1697]: time="2025-10-30T00:08:23.121244511Z" level=info msg="connecting to shim 54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48" address="unix:///run/containerd/s/748cfec3b246735e6a8bdd871103931c4acff58c84b27a81820bd6390c6e1dd9" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:23.142401 systemd[1]: Started cri-containerd-54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48.scope - libcontainer container 54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48. 
Oct 30 00:08:23.176461 containerd[1697]: time="2025-10-30T00:08:23.176444578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-44scr,Uid:b7023f33-bcd5-455f-bb39-ef094539fe80,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"54f3a6538fb33bd10e2fed3f8b523c3a8bc80f5b50cafa54a64074b667543c48\"" Oct 30 00:08:23.177434 containerd[1697]: time="2025-10-30T00:08:23.177408727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:23.436520 containerd[1697]: time="2025-10-30T00:08:23.436493281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:23.447964 containerd[1697]: time="2025-10-30T00:08:23.447940299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:23.448019 containerd[1697]: time="2025-10-30T00:08:23.447991112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:23.448109 kubelet[3160]: E1030 00:08:23.448084 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:23.448578 kubelet[3160]: E1030 00:08:23.448117 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:23.448578 kubelet[3160]: E1030 00:08:23.448265 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:23.449837 kubelet[3160]: E1030 00:08:23.449811 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:23.928376 containerd[1697]: time="2025-10-30T00:08:23.928117559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dlvqm,Uid:69b7796b-0241-4712-b4ee-3f03c5de49ac,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:23.928376 containerd[1697]: time="2025-10-30T00:08:23.928150342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffnpb,Uid:125a213b-c54a-4db2-bdd4-c80c7c20641e,Namespace:kube-system,Attempt:0,}" Oct 30 00:08:23.928376 containerd[1697]: time="2025-10-30T00:08:23.928117572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr5np,Uid:e474ec4d-2a3f-4853-a5fa-0c20bb4f628a,Namespace:kube-system,Attempt:0,}" Oct 30 00:08:23.928637 containerd[1697]: time="2025-10-30T00:08:23.928616687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgwld,Uid:a3b96faf-6434-4c32-bdb2-a83d279f75ef,Namespace:calico-system,Attempt:0,}" Oct 30 00:08:23.986395 systemd-networkd[1334]: cali5a751677fe8: Gained IPv6LL Oct 30 00:08:24.097669 kubelet[3160]: E1030 00:08:24.097640 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:24.098190 kubelet[3160]: E1030 
00:08:24.097900 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:24.109192 systemd-networkd[1334]: cali846a18eff40: Link UP Oct 30 00:08:24.112374 systemd-networkd[1334]: cali846a18eff40: Gained carrier Oct 30 00:08:24.133227 containerd[1697]: 2025-10-30 00:08:24.007 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0 coredns-674b8bbfcf- kube-system 125a213b-c54a-4db2-bdd4-c80c7c20641e 893 0 2025-10-30 00:07:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 coredns-674b8bbfcf-ffnpb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali846a18eff40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-" Oct 30 00:08:24.133227 containerd[1697]: 2025-10-30 00:08:24.007 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.133227 containerd[1697]: 2025-10-30 00:08:24.069 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" HandleID="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.070 [INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" HandleID="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-666d628454", "pod":"coredns-674b8bbfcf-ffnpb", "timestamp":"2025-10-30 00:08:24.069478153 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.070 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
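[Editor's note] The ImagePullBackOff messages above reflect kubelet's per-image backoff: each failed pull roughly doubles the wait before the next attempt, up to a cap. A sketch of that pattern (the 10s initial delay and 5m cap are kubelet's commonly documented defaults, assumed here rather than taken from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s initial backoff, doubling, capped at 5 minutes.
        delay, maxDelay := 10*time.Second, 5*time.Minute

        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: back off %v before retrying pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
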
Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.070 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.070 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.080 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.084 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.086 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.087 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.133992 containerd[1697]: 2025-10-30 00:08:24.089 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.089 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.090 [INFO][4765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176 Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.094 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.68/26] block=192.168.14.64/26 handle="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.68/26] handle="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
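[Editor's note] The calico-apiserver container spec dumped above declares a readiness probe: HTTPS GET /readyz on port 5443 with a 5s timeout. A rough standalone equivalent of what the probe does (the pod IP 192.168.14.67 comes from the IPAM lines above; skipping certificate verification mirrors how HTTPS probes are typically performed and is illustrative only):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe target from the container spec and IPAM result logged above.
        url := "https://192.168.14.67:5443/readyz"

        client := &http.Client{
            Timeout: 5 * time.Second, // TimeoutSeconds:5 in the probe spec
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        // Any 2xx/3xx status counts as ready for kubelet-style HTTP probes.
        fmt.Println("probe status:", resp.StatusCode)
    }
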
Oct 30 00:08:24.134217 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.68/26] IPv6=[] ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" HandleID="k8s-pod-network.870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.104 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"125a213b-c54a-4db2-bdd4-c80c7c20641e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"coredns-674b8bbfcf-ffnpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali846a18eff40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.105 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.68/32] ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.105 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali846a18eff40 ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.115 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.116 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"125a213b-c54a-4db2-bdd4-c80c7c20641e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176", Pod:"coredns-674b8bbfcf-ffnpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali846a18eff40", MAC:"f2:23:2f:b0:f0:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.134804 containerd[1697]: 2025-10-30 00:08:24.131 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffnpb" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--ffnpb-eth0" Oct 30 00:08:24.176451 containerd[1697]: time="2025-10-30T00:08:24.176393371Z" level=info msg="connecting to shim 870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176" address="unix:///run/containerd/s/c4d8322274c5fe0906d68774a1fcdb456cbaa8b5fb591634e39b0b29d2c46e19" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:24.200413 systemd[1]: Started cri-containerd-870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176.scope - libcontainer container 870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176. 
Oct 30 00:08:24.212069 systemd-networkd[1334]: calif7ec3c13827: Link UP Oct 30 00:08:24.213026 systemd-networkd[1334]: calif7ec3c13827: Gained carrier Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.011 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0 coredns-674b8bbfcf- kube-system e474ec4d-2a3f-4853-a5fa-0c20bb4f628a 891 0 2025-10-30 00:07:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 coredns-674b8bbfcf-sr5np eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7ec3c13827 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.011 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.072 [INFO][4770] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" HandleID="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.072 [INFO][4770] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" HandleID="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-666d628454", "pod":"coredns-674b8bbfcf-sr5np", "timestamp":"2025-10-30 00:08:24.072336518 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.072 [INFO][4770] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4770] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.103 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.180 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.185 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.187 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.189 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.192 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.192 [INFO][4770] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.193 [INFO][4770] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.199 [INFO][4770] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.206 [INFO][4770] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.69/26] block=192.168.14.64/26 handle="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.206 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.69/26] handle="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.206 [INFO][4770] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
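[Editor's note] The coredns WorkloadEndpointPort entries in this section print ports in hex: Port:0x35 and Port:0x23c1. Decoded, these are the familiar DNS and CoreDNS metrics ports; a two-line check in Go:

    package main

    import "fmt"

    func main() {
        // Hex port values as printed in the WorkloadEndpoint dumps above.
        fmt.Println(0x35)   // 53   (dns / dns-tcp)
        fmt.Println(0x23c1) // 9153 (coredns metrics)
    }
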
Oct 30 00:08:24.231715 containerd[1697]: 2025-10-30 00:08:24.206 [INFO][4770] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.69/26] IPv6=[] ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" HandleID="k8s-pod-network.86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Workload="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.209 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e474ec4d-2a3f-4853-a5fa-0c20bb4f628a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"coredns-674b8bbfcf-sr5np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7ec3c13827", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.209 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.69/32] ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.209 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7ec3c13827 ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.213 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.214 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e474ec4d-2a3f-4853-a5fa-0c20bb4f628a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba", Pod:"coredns-674b8bbfcf-sr5np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7ec3c13827", MAC:"6a:2c:29:05:62:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.233013 containerd[1697]: 2025-10-30 00:08:24.229 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr5np" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-coredns--674b8bbfcf--sr5np-eth0" Oct 30 00:08:24.266492 containerd[1697]: time="2025-10-30T00:08:24.266445901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffnpb,Uid:125a213b-c54a-4db2-bdd4-c80c7c20641e,Namespace:kube-system,Attempt:0,} returns sandbox id \"870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176\"" Oct 30 00:08:24.275171 containerd[1697]: time="2025-10-30T00:08:24.274647756Z" level=info msg="CreateContainer within sandbox \"870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:08:24.335407 containerd[1697]: time="2025-10-30T00:08:24.335380831Z" level=info msg="connecting to shim 86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba" 
address="unix:///run/containerd/s/7bfe6266e7681224191045a09fe5f92d5b5e85b4801a4acec7b95a048c51a6c8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:24.346341 containerd[1697]: time="2025-10-30T00:08:24.345825627Z" level=info msg="Container 2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:24.348808 systemd-networkd[1334]: cali892d0fc7a15: Link UP Oct 30 00:08:24.352508 systemd-networkd[1334]: cali892d0fc7a15: Gained carrier Oct 30 00:08:24.362432 systemd[1]: Started cri-containerd-86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba.scope - libcontainer container 86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba. Oct 30 00:08:24.368151 containerd[1697]: time="2025-10-30T00:08:24.367935555Z" level=info msg="CreateContainer within sandbox \"870ac60096128d51e21284f8f2bac922a57ebe1773f3b45d0275cc4309cd0176\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc\"" Oct 30 00:08:24.368518 containerd[1697]: time="2025-10-30T00:08:24.368416248Z" level=info msg="StartContainer for \"2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc\"" Oct 30 00:08:24.370008 containerd[1697]: time="2025-10-30T00:08:24.369875504Z" level=info msg="connecting to shim 2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc" address="unix:///run/containerd/s/c4d8322274c5fe0906d68774a1fcdb456cbaa8b5fb591634e39b0b29d2c46e19" protocol=ttrpc version=3 Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.038 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0 goldmane-666569f655- calico-system 69b7796b-0241-4712-b4ee-3f03c5de49ac 890 0 2025-10-30 00:07:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 goldmane-666569f655-dlvqm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali892d0fc7a15 [] [] }} ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.038 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.074 [INFO][4779] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" HandleID="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Workload="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.074 [INFO][4779] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" HandleID="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" 
Workload="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-666d628454", "pod":"goldmane-666569f655-dlvqm", "timestamp":"2025-10-30 00:08:24.074387493 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.074 [INFO][4779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.207 [INFO][4779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.207 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.281 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.285 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.321 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.323 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.324 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.324 [INFO][4779] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.325 [INFO][4779] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68 Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.329 [INFO][4779] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4779] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.70/26] block=192.168.14.64/26 handle="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.70/26] handle="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:08:24.370609 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4779] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.70/26] IPv6=[] ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" HandleID="k8s-pod-network.197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Workload="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.341 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"69b7796b-0241-4712-b4ee-3f03c5de49ac", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"goldmane-666569f655-dlvqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali892d0fc7a15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.341 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.70/32] ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.341 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali892d0fc7a15 ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.355 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.355 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" 
Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"69b7796b-0241-4712-b4ee-3f03c5de49ac", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68", Pod:"goldmane-666569f655-dlvqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali892d0fc7a15", MAC:"e2:1e:82:2a:9b:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.371632 containerd[1697]: 2025-10-30 00:08:24.368 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" Namespace="calico-system" Pod="goldmane-666569f655-dlvqm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-goldmane--666569f655--dlvqm-eth0" Oct 30 00:08:24.394684 systemd[1]: Started cri-containerd-2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc.scope - libcontainer container 2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc. Oct 30 00:08:24.413125 containerd[1697]: time="2025-10-30T00:08:24.413071360Z" level=info msg="connecting to shim 197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68" address="unix:///run/containerd/s/27ddc79b26396b8ace1c15e8f1a60e14aafd9f490d5cbc9dcdcee2437dd65ac7" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:24.457303 systemd-networkd[1334]: calie4eeba35fc3: Link UP Oct 30 00:08:24.457464 systemd[1]: Started cri-containerd-197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68.scope - libcontainer container 197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68. 
Oct 30 00:08:24.458467 systemd-networkd[1334]: calie4eeba35fc3: Gained carrier Oct 30 00:08:24.464063 containerd[1697]: time="2025-10-30T00:08:24.463945638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr5np,Uid:e474ec4d-2a3f-4853-a5fa-0c20bb4f628a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba\"" Oct 30 00:08:24.478919 containerd[1697]: time="2025-10-30T00:08:24.478865623Z" level=info msg="CreateContainer within sandbox \"86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.038 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0 csi-node-driver- calico-system a3b96faf-6434-4c32-bdb2-a83d279f75ef 713 0 2025-10-30 00:07:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 csi-node-driver-fgwld eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4eeba35fc3 [] [] }} ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.038 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.085 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" HandleID="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Workload="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.085 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" HandleID="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Workload="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-666d628454", "pod":"csi-node-driver-fgwld", "timestamp":"2025-10-30 00:08:24.085378154 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.085 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.339 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.387 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.392 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.423 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.425 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.427 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.427 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.429 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1 Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.438 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.447 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.71/26] block=192.168.14.64/26 handle="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.447 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.71/26] handle="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.447 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:08:24.484307 containerd[1697]: 2025-10-30 00:08:24.447 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.71/26] IPv6=[] ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" HandleID="k8s-pod-network.d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Workload="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.450 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3b96faf-6434-4c32-bdb2-a83d279f75ef", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"csi-node-driver-fgwld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4eeba35fc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.450 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.71/32] ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.450 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4eeba35fc3 ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.462 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.464 [INFO][4752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3b96faf-6434-4c32-bdb2-a83d279f75ef", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1", Pod:"csi-node-driver-fgwld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4eeba35fc3", MAC:"b2:54:3e:ca:97:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:24.485710 containerd[1697]: 2025-10-30 00:08:24.481 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" Namespace="calico-system" Pod="csi-node-driver-fgwld" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-csi--node--driver--fgwld-eth0" Oct 30 00:08:24.504112 containerd[1697]: time="2025-10-30T00:08:24.504011112Z" level=info msg="StartContainer for \"2ed270bdafff85737254aef9016cac6535f39e84c1a4ad6fbea50cffebe1b5fc\" returns successfully" Oct 30 00:08:24.521510 containerd[1697]: time="2025-10-30T00:08:24.521478732Z" level=info msg="Container a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:08:24.547518 containerd[1697]: time="2025-10-30T00:08:24.547462151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dlvqm,Uid:69b7796b-0241-4712-b4ee-3f03c5de49ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"197fa2b916753a69ddf061d79e8d72e9aa5fa1d9258dabd1d0c631b10cf4da68\"" Oct 30 00:08:24.548288 containerd[1697]: time="2025-10-30T00:08:24.548211470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:08:24.554849 containerd[1697]: time="2025-10-30T00:08:24.554833733Z" level=info msg="CreateContainer within sandbox \"86f713ab0d8a7c696d84a60efc39b7c12c3546db4a4c23eec3fde2b7ebc437ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9\"" Oct 30 00:08:24.556227 containerd[1697]: time="2025-10-30T00:08:24.555328599Z" level=info msg="StartContainer for 
\"a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9\"" Oct 30 00:08:24.556227 containerd[1697]: time="2025-10-30T00:08:24.555945957Z" level=info msg="connecting to shim a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9" address="unix:///run/containerd/s/7bfe6266e7681224191045a09fe5f92d5b5e85b4801a4acec7b95a048c51a6c8" protocol=ttrpc version=3 Oct 30 00:08:24.575427 systemd[1]: Started cri-containerd-a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9.scope - libcontainer container a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9. Oct 30 00:08:24.578604 containerd[1697]: time="2025-10-30T00:08:24.578581255Z" level=info msg="connecting to shim d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1" address="unix:///run/containerd/s/5d8cce902d5e0299faaae61fc2b6d66a9549138cb2a56281bd1b8aa9d39c2662" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:24.604109 systemd[1]: Started cri-containerd-d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1.scope - libcontainer container d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1. Oct 30 00:08:24.614846 containerd[1697]: time="2025-10-30T00:08:24.614827687Z" level=info msg="StartContainer for \"a769d539eab3730d955773cd6ca3426df6d93833bbba4b4d5692206683cac0c9\" returns successfully" Oct 30 00:08:24.637865 containerd[1697]: time="2025-10-30T00:08:24.637844792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgwld,Uid:a3b96faf-6434-4c32-bdb2-a83d279f75ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"d39899fac47ef5bbbbbfa1da974ba1bf929450ec7ef3260e280d9a09f86ec3a1\"" Oct 30 00:08:24.811638 containerd[1697]: time="2025-10-30T00:08:24.811616589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:24.815048 containerd[1697]: time="2025-10-30T00:08:24.815025166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:08:24.815101 containerd[1697]: time="2025-10-30T00:08:24.815080430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:24.815247 kubelet[3160]: E1030 00:08:24.815208 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:24.815744 kubelet[3160]: E1030 00:08:24.815255 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:24.815744 kubelet[3160]: E1030 00:08:24.815468 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndm4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:24.815869 containerd[1697]: time="2025-10-30T00:08:24.815432648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:08:24.817128 kubelet[3160]: E1030 00:08:24.817087 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:24.927686 containerd[1697]: time="2025-10-30T00:08:24.927644390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-vhhsm,Uid:4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:08:24.945418 systemd-networkd[1334]: cali8d8a9a264f7: Gained IPv6LL Oct 30 00:08:25.018412 systemd-networkd[1334]: cali4fec0738c38: Link UP Oct 30 00:08:25.018588 systemd-networkd[1334]: cali4fec0738c38: Gained carrier Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.965 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0 calico-apiserver-d4dc65c88- calico-apiserver 4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf 894 0 2025-10-30 00:07:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4dc65c88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-666d628454 calico-apiserver-d4dc65c88-vhhsm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fec0738c38 [] [] }} ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.966 [INFO][5083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.988 [INFO][5094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" HandleID="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.988 [INFO][5094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" HandleID="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-666d628454", "pod":"calico-apiserver-d4dc65c88-vhhsm", "timestamp":"2025-10-30 00:08:24.988239088 +0000 UTC"}, Hostname:"ci-4459.1.0-n-666d628454", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.988 [INFO][5094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.988 [INFO][5094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.988 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-666d628454' Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.992 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.995 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.997 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:24.998 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.000 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.000 [INFO][5094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.001 [INFO][5094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55 Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.004 [INFO][5094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.014 [INFO][5094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.72/26] block=192.168.14.64/26 handle="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.014 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.72/26] handle="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" host="ci-4459.1.0-n-666d628454" Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.014 [INFO][5094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:08:25.029086 containerd[1697]: 2025-10-30 00:08:25.014 [INFO][5094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.72/26] IPv6=[] ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" HandleID="k8s-pod-network.10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Workload="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.015 [INFO][5083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0", GenerateName:"calico-apiserver-d4dc65c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4dc65c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"", Pod:"calico-apiserver-d4dc65c88-vhhsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fec0738c38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.015 [INFO][5083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.72/32] ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.015 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fec0738c38 ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.018 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.018 [INFO][5083] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0", GenerateName:"calico-apiserver-d4dc65c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4dc65c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-666d628454", ContainerID:"10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55", Pod:"calico-apiserver-d4dc65c88-vhhsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fec0738c38", MAC:"3e:5c:e2:75:c5:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:08:25.030789 containerd[1697]: 2025-10-30 00:08:25.027 [INFO][5083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" Namespace="calico-apiserver" Pod="calico-apiserver-d4dc65c88-vhhsm" WorkloadEndpoint="ci--4459.1.0--n--666d628454-k8s-calico--apiserver--d4dc65c88--vhhsm-eth0" Oct 30 00:08:25.054980 containerd[1697]: time="2025-10-30T00:08:25.054959809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:25.071216 containerd[1697]: time="2025-10-30T00:08:25.071153272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:08:25.071216 containerd[1697]: time="2025-10-30T00:08:25.071210920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:08:25.071450 kubelet[3160]: E1030 00:08:25.071394 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:25.071450 kubelet[3160]: E1030 00:08:25.071435 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:25.072167 kubelet[3160]: E1030 00:08:25.071540 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:25.073097 containerd[1697]: time="2025-10-30T00:08:25.073077265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:08:25.084534 containerd[1697]: time="2025-10-30T00:08:25.084509926Z" level=info msg="connecting to shim 10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55" address="unix:///run/containerd/s/cab802b263e3dadee5de667e16875c5d40313c77889f008b6795b2a20ece239d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:08:25.101948 kubelet[3160]: E1030 00:08:25.101922 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:25.103430 systemd[1]: Started cri-containerd-10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55.scope - libcontainer container 10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55. Oct 30 00:08:25.108998 kubelet[3160]: E1030 00:08:25.108911 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:25.136656 kubelet[3160]: I1030 00:08:25.136407 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sr5np" podStartSLOduration=72.136393764 podStartE2EDuration="1m12.136393764s" podCreationTimestamp="2025-10-30 00:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:08:25.135591826 +0000 UTC m=+78.287526066" watchObservedRunningTime="2025-10-30 00:08:25.136393764 +0000 UTC m=+78.288328002" Oct 30 00:08:25.155047 kubelet[3160]: I1030 00:08:25.154706 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ffnpb" podStartSLOduration=72.154696885 podStartE2EDuration="1m12.154696885s" podCreationTimestamp="2025-10-30 00:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:08:25.154566743 +0000 UTC m=+78.306500983" watchObservedRunningTime="2025-10-30 00:08:25.154696885 +0000 UTC m=+78.306631126" Oct 30 00:08:25.186834 containerd[1697]: time="2025-10-30T00:08:25.186783271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4dc65c88-vhhsm,Uid:4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10b3298edaba5306f0105f7dcae2d0d7df3cc097811eba72c1c7f4aef9ccbf55\"" Oct 30 00:08:25.330364 systemd-networkd[1334]: calif7ec3c13827: Gained IPv6LL Oct 30 00:08:25.340448 containerd[1697]: time="2025-10-30T00:08:25.340421103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:25.343031 containerd[1697]: time="2025-10-30T00:08:25.343006694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:08:25.343105 containerd[1697]: time="2025-10-30T00:08:25.343055432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:08:25.343226 kubelet[3160]: E1030 00:08:25.343198 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:25.343265 kubelet[3160]: E1030 00:08:25.343233 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:25.343468 kubelet[3160]: E1030 00:08:25.343429 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:25.344241 containerd[1697]: time="2025-10-30T00:08:25.343572552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:25.345475 kubelet[3160]: E1030 00:08:25.345445 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:25.457366 systemd-networkd[1334]: cali892d0fc7a15: Gained IPv6LL Oct 30 00:08:25.620856 containerd[1697]: time="2025-10-30T00:08:25.620756686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:25.623984 containerd[1697]: time="2025-10-30T00:08:25.623960548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:25.624056 containerd[1697]: time="2025-10-30T00:08:25.624007762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:25.624180 kubelet[3160]: E1030 00:08:25.624151 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:25.624212 kubelet[3160]: E1030 00:08:25.624188 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:25.624542 kubelet[3160]: E1030 00:08:25.624313 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzbqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:25.625885 kubelet[3160]: E1030 00:08:25.625829 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:08:25.713350 systemd-networkd[1334]: calie4eeba35fc3: Gained IPv6LL Oct 30 00:08:25.970353 systemd-networkd[1334]: cali846a18eff40: Gained IPv6LL Oct 30 00:08:26.112304 kubelet[3160]: E1030 00:08:26.111992 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:08:26.113523 kubelet[3160]: E1030 00:08:26.113480 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:26.113609 kubelet[3160]: E1030 00:08:26.113503 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:26.353362 systemd-networkd[1334]: cali4fec0738c38: Gained IPv6LL Oct 30 00:08:27.112272 kubelet[3160]: E1030 00:08:27.112176 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:08:30.928946 containerd[1697]: time="2025-10-30T00:08:30.928806938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:08:31.188398 containerd[1697]: time="2025-10-30T00:08:31.188263608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:31.193917 containerd[1697]: time="2025-10-30T00:08:31.193891389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:08:31.193970 containerd[1697]: time="2025-10-30T00:08:31.193944229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:08:31.194068 kubelet[3160]: E1030 00:08:31.194024 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:31.194437 kubelet[3160]: E1030 00:08:31.194076 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:31.194437 kubelet[3160]: E1030 00:08:31.194176 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e70db6a829a44c9bf10fd58f8144dc1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:31.196341 containerd[1697]: time="2025-10-30T00:08:31.196292088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:08:31.440597 containerd[1697]: time="2025-10-30T00:08:31.440487316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:31.443370 containerd[1697]: time="2025-10-30T00:08:31.443313817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:08:31.443370 containerd[1697]: time="2025-10-30T00:08:31.443354485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:31.443472 kubelet[3160]: E1030 00:08:31.443439 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:31.443513 kubelet[3160]: E1030 00:08:31.443479 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:31.443606 kubelet[3160]: E1030 00:08:31.443578 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:31.444996 kubelet[3160]: E1030 00:08:31.444922 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:08:35.928420 containerd[1697]: time="2025-10-30T00:08:35.928335771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:36.176455 containerd[1697]: time="2025-10-30T00:08:36.176418032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 
30 00:08:36.180194 containerd[1697]: time="2025-10-30T00:08:36.179970800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:36.180194 containerd[1697]: time="2025-10-30T00:08:36.179987975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:36.180380 kubelet[3160]: E1030 00:08:36.180114 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:36.180380 kubelet[3160]: E1030 00:08:36.180147 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:36.180380 kubelet[3160]: E1030 00:08:36.180265 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:36.181845 kubelet[3160]: E1030 00:08:36.181448 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:37.927801 containerd[1697]: time="2025-10-30T00:08:37.927770477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:08:38.169205 containerd[1697]: time="2025-10-30T00:08:38.169155479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:38.183587 containerd[1697]: time="2025-10-30T00:08:38.183488842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:08:38.183587 containerd[1697]: time="2025-10-30T00:08:38.183545422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:38.183801 kubelet[3160]: E1030 00:08:38.183758 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:38.184009 kubelet[3160]: E1030 00:08:38.183814 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:38.184009 kubelet[3160]: E1030 00:08:38.183934 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndm4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:38.185359 kubelet[3160]: E1030 00:08:38.185326 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:38.928834 containerd[1697]: 
time="2025-10-30T00:08:38.928664536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:08:39.175783 containerd[1697]: time="2025-10-30T00:08:39.175756455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:39.182236 containerd[1697]: time="2025-10-30T00:08:39.182168679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:08:39.182236 containerd[1697]: time="2025-10-30T00:08:39.182218527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:39.182523 kubelet[3160]: E1030 00:08:39.182317 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:39.182523 kubelet[3160]: E1030 00:08:39.182349 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:39.182523 kubelet[3160]: E1030 00:08:39.182464 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wz2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:39.183859 kubelet[3160]: E1030 00:08:39.183823 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:41.928630 containerd[1697]: time="2025-10-30T00:08:41.928406698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:42.177604 containerd[1697]: time="2025-10-30T00:08:42.177567350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:42.187561 containerd[1697]: time="2025-10-30T00:08:42.187490873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:42.187561 containerd[1697]: time="2025-10-30T00:08:42.187535993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:42.187762 kubelet[3160]: E1030 00:08:42.187618 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:42.187762 kubelet[3160]: E1030 00:08:42.187651 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:42.188132 kubelet[3160]: 
E1030 00:08:42.187871 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzbqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:42.188589 containerd[1697]: time="2025-10-30T00:08:42.188332587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:08:42.189663 kubelet[3160]: E1030 00:08:42.189631 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:08:42.440596 containerd[1697]: time="2025-10-30T00:08:42.440519122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:42.443880 containerd[1697]: time="2025-10-30T00:08:42.443844252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:08:42.443929 containerd[1697]: 
time="2025-10-30T00:08:42.443844254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:08:42.444053 kubelet[3160]: E1030 00:08:42.444023 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:42.444097 kubelet[3160]: E1030 00:08:42.444059 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:42.444207 kubelet[3160]: E1030 00:08:42.444172 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:42.446413 containerd[1697]: time="2025-10-30T00:08:42.446166993Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:08:42.704022 containerd[1697]: time="2025-10-30T00:08:42.703933340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:42.707403 containerd[1697]: time="2025-10-30T00:08:42.707359614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:08:42.707480 containerd[1697]: time="2025-10-30T00:08:42.707363330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:08:42.707537 kubelet[3160]: E1030 00:08:42.707508 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:42.707570 kubelet[3160]: E1030 00:08:42.707545 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:42.707665 kubelet[3160]: E1030 00:08:42.707635 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:42.708932 kubelet[3160]: E1030 00:08:42.708912 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:46.141021 containerd[1697]: time="2025-10-30T00:08:46.140980798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"f44f81f68255dc534c4662f7be4011d222490980d740c3f3986a8bda03b5ee35\" pid:5194 exited_at:{seconds:1761782926 nanos:140742861}" Oct 30 00:08:46.930756 kubelet[3160]: E1030 00:08:46.930576 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:08:48.929019 kubelet[3160]: E1030 00:08:48.928766 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:08:50.930686 kubelet[3160]: E1030 00:08:50.930585 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:08:53.928453 kubelet[3160]: E1030 00:08:53.928415 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:08:54.930051 kubelet[3160]: E1030 00:08:54.930003 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:08:56.930481 kubelet[3160]: E1030 00:08:56.930437 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:01.930159 containerd[1697]: time="2025-10-30T00:09:01.929658510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:09:02.215791 containerd[1697]: time="2025-10-30T00:09:02.215664402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:02.218794 containerd[1697]: time="2025-10-30T00:09:02.218765662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:09:02.218895 containerd[1697]: time="2025-10-30T00:09:02.218824288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:09:02.218932 kubelet[3160]: E1030 00:09:02.218903 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:09:02.219222 kubelet[3160]: E1030 00:09:02.218938 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:09:02.220036 kubelet[3160]: E1030 00:09:02.219148 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e70db6a829a44c9bf10fd58f8144dc1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:02.220183 containerd[1697]: time="2025-10-30T00:09:02.219407380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:09:02.503225 containerd[1697]: time="2025-10-30T00:09:02.503133369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:02.507163 containerd[1697]: time="2025-10-30T00:09:02.507128942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:09:02.507241 containerd[1697]: time="2025-10-30T00:09:02.507203068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:02.507436 kubelet[3160]: E1030 00:09:02.507327 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:09:02.507436 kubelet[3160]: E1030 00:09:02.507376 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:09:02.507791 
containerd[1697]: time="2025-10-30T00:09:02.507677678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:09:02.507846 kubelet[3160]: E1030 00:09:02.507596 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndm4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:02.509182 kubelet[3160]: E1030 00:09:02.509144 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:09:02.754073 containerd[1697]: time="2025-10-30T00:09:02.753976440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:02.756902 containerd[1697]: time="2025-10-30T00:09:02.756863519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:09:02.756977 containerd[1697]: time="2025-10-30T00:09:02.756932570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:09:02.757069 kubelet[3160]: E1030 00:09:02.757036 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:09:02.757110 kubelet[3160]: E1030 00:09:02.757087 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:09:02.757417 kubelet[3160]: E1030 00:09:02.757190 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:02.758377 kubelet[3160]: E1030 00:09:02.758345 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:09:05.929508 containerd[1697]: time="2025-10-30T00:09:05.929376575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:09:06.175048 containerd[1697]: time="2025-10-30T00:09:06.175010295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:06.184741 containerd[1697]: time="2025-10-30T00:09:06.184492716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:09:06.184741 containerd[1697]: time="2025-10-30T00:09:06.184557328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:06.184851 kubelet[3160]: E1030 00:09:06.184744 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:06.184851 kubelet[3160]: E1030 00:09:06.184777 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:06.185091 kubelet[3160]: E1030 00:09:06.184941 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:06.186427 containerd[1697]: time="2025-10-30T00:09:06.186404016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:09:06.186626 kubelet[3160]: E1030 00:09:06.186600 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:09:06.432448 containerd[1697]: time="2025-10-30T00:09:06.432418974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:06.435220 containerd[1697]: time="2025-10-30T00:09:06.435128600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:09:06.435220 containerd[1697]: time="2025-10-30T00:09:06.435193198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:09:06.435575 kubelet[3160]: E1030 00:09:06.435547 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:09:06.435614 kubelet[3160]: E1030 00:09:06.435587 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:09:06.436316 kubelet[3160]: E1030 00:09:06.435841 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wz2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:06.437609 kubelet[3160]: E1030 00:09:06.437569 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:09:06.930153 containerd[1697]: time="2025-10-30T00:09:06.929557729Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:09:07.200024 containerd[1697]: time="2025-10-30T00:09:07.199940483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:07.206052 containerd[1697]: time="2025-10-30T00:09:07.206019294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:09:07.206240 containerd[1697]: time="2025-10-30T00:09:07.206152779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:09:07.206457 kubelet[3160]: E1030 00:09:07.206360 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:09:07.206457 kubelet[3160]: E1030 00:09:07.206411 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:09:07.206904 kubelet[3160]: E1030 00:09:07.206843 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:07.208852 containerd[1697]: time="2025-10-30T00:09:07.208830375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:09:07.467865 containerd[1697]: time="2025-10-30T00:09:07.467789871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:07.473459 containerd[1697]: time="2025-10-30T00:09:07.473433083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:09:07.473537 containerd[1697]: time="2025-10-30T00:09:07.473484043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:09:07.473601 kubelet[3160]: E1030 00:09:07.473568 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:09:07.473659 kubelet[3160]: E1030 00:09:07.473608 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:09:07.473886 kubelet[3160]: E1030 00:09:07.473725 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:07.475111 kubelet[3160]: E1030 00:09:07.475084 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:09:10.928506 containerd[1697]: time="2025-10-30T00:09:10.928471384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:09:11.193356 containerd[1697]: time="2025-10-30T00:09:11.193183269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:11.196419 containerd[1697]: time="2025-10-30T00:09:11.196375717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:09:11.196508 containerd[1697]: time="2025-10-30T00:09:11.196435775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:11.196588 kubelet[3160]: E1030 00:09:11.196539 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:11.196922 kubelet[3160]: E1030 00:09:11.196597 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:11.196922 kubelet[3160]: E1030 00:09:11.196732 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzbqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:11.198221 kubelet[3160]: E1030 00:09:11.198159 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:13.929969 kubelet[3160]: E1030 00:09:13.929922 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:09:15.928506 kubelet[3160]: E1030 00:09:15.928470 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:09:16.260621 containerd[1697]: time="2025-10-30T00:09:16.260588347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"047ac1047155862ff74f25998c05f22de4bcee7a8d5ec42a8d903460f53629ca\" pid:5234 exited_at:{seconds:1761782956 nanos:260371371}" Oct 30 00:09:16.930682 kubelet[3160]: E1030 00:09:16.930457 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:09:18.929646 kubelet[3160]: E1030 00:09:18.929247 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:09:19.928795 kubelet[3160]: E1030 00:09:19.928711 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:09:22.929679 kubelet[3160]: E1030 00:09:22.929601 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:28.930364 systemd[1]: Started sshd@7-10.200.8.44:22-10.200.16.10:33722.service - OpenSSH per-connection server daemon (10.200.16.10:33722). 
Oct 30 00:09:28.935336 kubelet[3160]: E1030 00:09:28.935296 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:09:29.570567 sshd[5253]: Accepted publickey for core from 10.200.16.10 port 33722 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:29.571875 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:29.581081 systemd-logind[1672]: New session 10 of user core. Oct 30 00:09:29.585455 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 00:09:29.929230 kubelet[3160]: E1030 00:09:29.929141 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:09:30.098593 sshd[5256]: Connection closed by 10.200.16.10 port 33722 Oct 30 00:09:30.098940 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:30.104774 systemd-logind[1672]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:09:30.105814 systemd[1]: sshd@7-10.200.8.44:22-10.200.16.10:33722.service: Deactivated successfully. Oct 30 00:09:30.108998 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:09:30.111250 systemd-logind[1672]: Removed session 10. 
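The sshd entries above identify the client key only by its fingerprint (RSA SHA256:+HWrf…). For reference, the fingerprint format OpenSSH logs here is the base64-encoded SHA-256 digest of the raw public-key blob with the '=' padding stripped; the sketch below reproduces it from a public-key file. The id_rsa.pub path is a hypothetical example, not a file referenced by this log.

import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like: "ssh-rsa AAAAB3... comment";
    # the fingerprint is SHA-256 over the decoded key blob, base64 without padding.
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("id_rsa.pub", encoding="ascii") as fh:  # hypothetical path
        print(openssh_sha256_fingerprint(fh.read()))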
Oct 30 00:09:30.930350 kubelet[3160]: E1030 00:09:30.929700 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:09:31.928780 kubelet[3160]: E1030 00:09:31.928686 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:09:33.929463 kubelet[3160]: E1030 00:09:33.928490 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:09:35.210002 systemd[1]: Started sshd@8-10.200.8.44:22-10.200.16.10:48024.service - OpenSSH per-connection server daemon (10.200.16.10:48024). Oct 30 00:09:35.847083 sshd[5270]: Accepted publickey for core from 10.200.16.10 port 48024 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:35.848096 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:35.851941 systemd-logind[1672]: New session 11 of user core. Oct 30 00:09:35.856400 systemd[1]: Started session-11.scope - Session 11 of User core. 
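Once the initial pulls fail, the kubelet entries switch from ErrImagePull to ImagePullBackOff and the retry attempts space out, consistent with an exponential back-off that is eventually capped. The sketch below only illustrates that pattern; the 10-second initial delay and 5-minute cap are assumptions for illustration, not a claim about kubelet's exact constants.

def backoff_delays(initial: float = 10.0, cap: float = 300.0):
    # Yield successive retry delays: initial, 2x, 4x, ... capped at `cap` seconds.
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= 2

if __name__ == "__main__":
    delays = backoff_delays()
    for attempt in range(1, 8):
        print(f"attempt {attempt}: wait {next(delays):.0f}s before retrying the pull")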
Oct 30 00:09:35.928453 kubelet[3160]: E1030 00:09:35.928391 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:36.341986 sshd[5273]: Connection closed by 10.200.16.10 port 48024 Oct 30 00:09:36.342344 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:36.344905 systemd[1]: sshd@8-10.200.8.44:22-10.200.16.10:48024.service: Deactivated successfully. Oct 30 00:09:36.346457 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:09:36.347181 systemd-logind[1672]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:09:36.348333 systemd-logind[1672]: Removed session 11. Oct 30 00:09:39.929127 kubelet[3160]: E1030 00:09:39.929087 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:09:40.928443 kubelet[3160]: E1030 00:09:40.927719 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:09:41.452760 systemd[1]: Started sshd@9-10.200.8.44:22-10.200.16.10:49802.service - OpenSSH per-connection server daemon (10.200.16.10:49802). Oct 30 00:09:42.094418 sshd[5292]: Accepted publickey for core from 10.200.16.10 port 49802 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:42.095688 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:42.099894 systemd-logind[1672]: New session 12 of user core. Oct 30 00:09:42.107448 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 30 00:09:42.584449 sshd[5295]: Connection closed by 10.200.16.10 port 49802 Oct 30 00:09:42.585860 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:42.587957 systemd[1]: sshd@9-10.200.8.44:22-10.200.16.10:49802.service: Deactivated successfully. Oct 30 00:09:42.589818 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:09:42.590611 systemd-logind[1672]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:09:42.591793 systemd-logind[1672]: Removed session 12. Oct 30 00:09:42.693583 systemd[1]: Started sshd@10-10.200.8.44:22-10.200.16.10:49810.service - OpenSSH per-connection server daemon (10.200.16.10:49810). Oct 30 00:09:43.318202 sshd[5308]: Accepted publickey for core from 10.200.16.10 port 49810 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:43.320875 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:43.326679 systemd-logind[1672]: New session 13 of user core. Oct 30 00:09:43.332602 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:09:43.880863 sshd[5311]: Connection closed by 10.200.16.10 port 49810 Oct 30 00:09:43.881691 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:43.887541 systemd-logind[1672]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:09:43.889056 systemd[1]: sshd@10-10.200.8.44:22-10.200.16.10:49810.service: Deactivated successfully. Oct 30 00:09:43.891781 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:09:43.893848 systemd-logind[1672]: Removed session 13. Oct 30 00:09:43.928423 kubelet[3160]: E1030 00:09:43.928358 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:09:43.929389 kubelet[3160]: E1030 00:09:43.928697 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:09:43.999452 systemd[1]: Started sshd@11-10.200.8.44:22-10.200.16.10:49822.service - OpenSSH per-connection server daemon (10.200.16.10:49822). 
Oct 30 00:09:44.632302 sshd[5321]: Accepted publickey for core from 10.200.16.10 port 49822 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:44.633422 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:44.637171 systemd-logind[1672]: New session 14 of user core. Oct 30 00:09:44.642387 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:09:45.157029 sshd[5328]: Connection closed by 10.200.16.10 port 49822 Oct 30 00:09:45.159696 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:45.163649 systemd-logind[1672]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:09:45.164481 systemd[1]: sshd@11-10.200.8.44:22-10.200.16.10:49822.service: Deactivated successfully. Oct 30 00:09:45.167070 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:09:45.170806 systemd-logind[1672]: Removed session 14. Oct 30 00:09:45.927790 kubelet[3160]: E1030 00:09:45.927754 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:09:46.141580 containerd[1697]: time="2025-10-30T00:09:46.141531866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"0595d5fc18590477e726f5930599536e8216fe8060a14ff95d17818c8691c44a\" pid:5354 exit_status:1 exited_at:{seconds:1761782986 nanos:141149389}" Oct 30 00:09:46.929770 kubelet[3160]: E1030 00:09:46.929564 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:50.271850 systemd[1]: Started sshd@12-10.200.8.44:22-10.200.16.10:33066.service - OpenSSH per-connection server daemon (10.200.16.10:33066). Oct 30 00:09:50.898703 sshd[5382]: Accepted publickey for core from 10.200.16.10 port 33066 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:50.899599 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:50.903574 systemd-logind[1672]: New session 15 of user core. Oct 30 00:09:50.912444 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:09:51.413941 sshd[5392]: Connection closed by 10.200.16.10 port 33066 Oct 30 00:09:51.414441 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:51.417901 systemd[1]: sshd@12-10.200.8.44:22-10.200.16.10:33066.service: Deactivated successfully. 
Oct 30 00:09:51.420227 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:09:51.422670 systemd-logind[1672]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:09:51.424026 systemd-logind[1672]: Removed session 15. Oct 30 00:09:51.928578 containerd[1697]: time="2025-10-30T00:09:51.928546700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:09:52.172668 containerd[1697]: time="2025-10-30T00:09:52.172637909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:52.176231 containerd[1697]: time="2025-10-30T00:09:52.176195113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:09:52.176348 containerd[1697]: time="2025-10-30T00:09:52.176199872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:09:52.176393 kubelet[3160]: E1030 00:09:52.176365 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:09:52.176662 kubelet[3160]: E1030 00:09:52.176403 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:09:52.176662 kubelet[3160]: E1030 00:09:52.176508 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e70db6a829a44c9bf10fd58f8144dc1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:52.178559 containerd[1697]: time="2025-10-30T00:09:52.178538017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:09:52.419855 containerd[1697]: time="2025-10-30T00:09:52.419826106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:52.422732 containerd[1697]: time="2025-10-30T00:09:52.422698642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:09:52.422820 containerd[1697]: time="2025-10-30T00:09:52.422759095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:09:52.423386 kubelet[3160]: E1030 00:09:52.422855 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:09:52.423386 kubelet[3160]: E1030 00:09:52.422915 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:09:52.423386 kubelet[3160]: E1030 00:09:52.423142 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcpf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64fcb4dd76-8t4c6_calico-system(b8c2d711-e3d2-49d2-9ce4-f8ddd389b734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:52.424392 kubelet[3160]: E1030 00:09:52.424348 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:09:52.930811 containerd[1697]: time="2025-10-30T00:09:52.930620350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:09:53.201508 containerd[1697]: time="2025-10-30T00:09:53.201419577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:53.205249 containerd[1697]: time="2025-10-30T00:09:53.205179665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:09:53.205454 containerd[1697]: time="2025-10-30T00:09:53.205307059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:53.205699 kubelet[3160]: E1030 00:09:53.205627 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:09:53.205699 kubelet[3160]: E1030 00:09:53.205682 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:09:53.206519 kubelet[3160]: E1030 00:09:53.206150 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndm4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dlvqm_calico-system(69b7796b-0241-4712-b4ee-3f03c5de49ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:53.207846 kubelet[3160]: E1030 00:09:53.207814 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:09:56.529126 systemd[1]: Started sshd@13-10.200.8.44:22-10.200.16.10:33072.service - OpenSSH per-connection server daemon (10.200.16.10:33072). Oct 30 00:09:56.928834 containerd[1697]: time="2025-10-30T00:09:56.928742404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:09:57.165466 sshd[5403]: Accepted publickey for core from 10.200.16.10 port 33072 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:09:57.166180 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:09:57.170323 systemd-logind[1672]: New session 16 of user core. Oct 30 00:09:57.174394 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 30 00:09:57.200153 containerd[1697]: time="2025-10-30T00:09:57.200088806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:57.203105 containerd[1697]: time="2025-10-30T00:09:57.203079291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:09:57.203105 containerd[1697]: time="2025-10-30T00:09:57.203119921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:57.203232 kubelet[3160]: E1030 00:09:57.203208 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:57.203467 kubelet[3160]: E1030 00:09:57.203244 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:57.203719 kubelet[3160]: E1030 00:09:57.203667 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-44scr_calico-apiserver(b7023f33-bcd5-455f-bb39-ef094539fe80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:57.204877 kubelet[3160]: E1030 00:09:57.204839 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:09:57.666951 sshd[5406]: Connection closed by 10.200.16.10 port 33072 Oct 30 00:09:57.667293 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Oct 30 00:09:57.669760 systemd[1]: sshd@13-10.200.8.44:22-10.200.16.10:33072.service: Deactivated successfully. Oct 30 00:09:57.672688 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:09:57.675146 systemd-logind[1672]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:09:57.676334 systemd-logind[1672]: Removed session 16. 
Oct 30 00:09:57.930868 containerd[1697]: time="2025-10-30T00:09:57.930416136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:09:58.168897 containerd[1697]: time="2025-10-30T00:09:58.168787904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:58.172922 containerd[1697]: time="2025-10-30T00:09:58.172884473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:09:58.173087 containerd[1697]: time="2025-10-30T00:09:58.172931629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:09:58.173298 kubelet[3160]: E1030 00:09:58.173204 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:09:58.173298 kubelet[3160]: E1030 00:09:58.173263 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:09:58.173665 kubelet[3160]: E1030 00:09:58.173621 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wz2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-ffb6d876d-8qgfk_calico-system(edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:58.176169 kubelet[3160]: E1030 00:09:58.176143 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:09:58.929422 containerd[1697]: time="2025-10-30T00:09:58.929342525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:09:59.169082 containerd[1697]: time="2025-10-30T00:09:59.168987130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:59.173293 containerd[1697]: time="2025-10-30T00:09:59.173193415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:09:59.173293 containerd[1697]: time="2025-10-30T00:09:59.173263856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:09:59.173789 kubelet[3160]: E1030 00:09:59.173508 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:09:59.173789 kubelet[3160]: E1030 00:09:59.173552 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:09:59.173789 kubelet[3160]: E1030 00:09:59.173703 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:59.174525 containerd[1697]: time="2025-10-30T00:09:59.174411529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:09:59.418906 containerd[1697]: time="2025-10-30T00:09:59.418868378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:59.426293 containerd[1697]: time="2025-10-30T00:09:59.426224434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:09:59.426434 containerd[1697]: time="2025-10-30T00:09:59.426388424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:09:59.426598 kubelet[3160]: E1030 00:09:59.426539 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:59.426598 kubelet[3160]: E1030 00:09:59.426585 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:09:59.426911 kubelet[3160]: E1030 00:09:59.426873 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzbqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d4dc65c88-vhhsm_calico-apiserver(4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:59.427731 containerd[1697]: time="2025-10-30T00:09:59.427676886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:09:59.429164 kubelet[3160]: E1030 00:09:59.429124 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:09:59.677691 containerd[1697]: time="2025-10-30T00:09:59.677597971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:09:59.686137 containerd[1697]: time="2025-10-30T00:09:59.685957423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:09:59.686226 containerd[1697]: time="2025-10-30T00:09:59.686142425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:09:59.686250 kubelet[3160]: E1030 00:09:59.686210 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:09:59.686301 kubelet[3160]: E1030 00:09:59.686245 3160 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:09:59.686457 kubelet[3160]: E1030 00:09:59.686381 3160 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr2xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fgwld_calico-system(a3b96faf-6434-4c32-bdb2-a83d279f75ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:09:59.687624 kubelet[3160]: E1030 00:09:59.687595 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:10:02.781314 systemd[1]: Started sshd@14-10.200.8.44:22-10.200.16.10:45554.service - OpenSSH per-connection server daemon (10.200.16.10:45554). Oct 30 00:10:03.417034 sshd[5418]: Accepted publickey for core from 10.200.16.10 port 45554 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:03.418364 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:03.422193 systemd-logind[1672]: New session 17 of user core. 
Oct 30 00:10:03.427390 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:10:03.908830 sshd[5421]: Connection closed by 10.200.16.10 port 45554 Oct 30 00:10:03.909947 sshd-session[5418]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:03.912929 systemd[1]: sshd@14-10.200.8.44:22-10.200.16.10:45554.service: Deactivated successfully. Oct 30 00:10:03.915311 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:10:03.916715 systemd-logind[1672]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:10:03.918559 systemd-logind[1672]: Removed session 17. Oct 30 00:10:03.928411 kubelet[3160]: E1030 00:10:03.928044 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:10:04.021616 systemd[1]: Started sshd@15-10.200.8.44:22-10.200.16.10:45556.service - OpenSSH per-connection server daemon (10.200.16.10:45556). Oct 30 00:10:04.661344 sshd[5433]: Accepted publickey for core from 10.200.16.10 port 45556 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:04.662447 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:04.667821 systemd-logind[1672]: New session 18 of user core. Oct 30 00:10:04.674772 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 00:10:05.216160 sshd[5436]: Connection closed by 10.200.16.10 port 45556 Oct 30 00:10:05.216559 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:05.221871 systemd[1]: sshd@15-10.200.8.44:22-10.200.16.10:45556.service: Deactivated successfully. Oct 30 00:10:05.223868 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:10:05.225236 systemd-logind[1672]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:10:05.227592 systemd-logind[1672]: Removed session 18. Oct 30 00:10:05.325679 systemd[1]: Started sshd@16-10.200.8.44:22-10.200.16.10:45564.service - OpenSSH per-connection server daemon (10.200.16.10:45564). Oct 30 00:10:05.955685 sshd[5446]: Accepted publickey for core from 10.200.16.10 port 45564 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:05.956088 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:05.962036 systemd-logind[1672]: New session 19 of user core. Oct 30 00:10:05.969073 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:10:06.771358 sshd[5449]: Connection closed by 10.200.16.10 port 45564 Oct 30 00:10:06.771801 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:06.774761 systemd[1]: sshd@16-10.200.8.44:22-10.200.16.10:45564.service: Deactivated successfully. Oct 30 00:10:06.776508 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 00:10:06.777445 systemd-logind[1672]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:10:06.778773 systemd-logind[1672]: Removed session 19. 
Oct 30 00:10:06.883850 systemd[1]: Started sshd@17-10.200.8.44:22-10.200.16.10:45566.service - OpenSSH per-connection server daemon (10.200.16.10:45566). Oct 30 00:10:07.514550 sshd[5466]: Accepted publickey for core from 10.200.16.10 port 45566 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:07.515051 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:07.521038 systemd-logind[1672]: New session 20 of user core. Oct 30 00:10:07.526422 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 00:10:07.931266 kubelet[3160]: E1030 00:10:07.931132 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:10:08.089459 sshd[5471]: Connection closed by 10.200.16.10 port 45566 Oct 30 00:10:08.090500 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:08.093247 systemd[1]: sshd@17-10.200.8.44:22-10.200.16.10:45566.service: Deactivated successfully. Oct 30 00:10:08.095175 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:10:08.095930 systemd-logind[1672]: Session 20 logged out. Waiting for processes to exit. Oct 30 00:10:08.097052 systemd-logind[1672]: Removed session 20. Oct 30 00:10:08.210581 systemd[1]: Started sshd@18-10.200.8.44:22-10.200.16.10:45580.service - OpenSSH per-connection server daemon (10.200.16.10:45580). Oct 30 00:10:08.842680 sshd[5481]: Accepted publickey for core from 10.200.16.10 port 45580 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:08.844767 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:08.852176 systemd-logind[1672]: New session 21 of user core. Oct 30 00:10:08.856417 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 00:10:09.354056 sshd[5484]: Connection closed by 10.200.16.10 port 45580 Oct 30 00:10:09.354995 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:09.358529 systemd[1]: sshd@18-10.200.8.44:22-10.200.16.10:45580.service: Deactivated successfully. Oct 30 00:10:09.360296 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 00:10:09.362073 systemd-logind[1672]: Session 21 logged out. Waiting for processes to exit. Oct 30 00:10:09.364783 systemd-logind[1672]: Removed session 21. 
Oct 30 00:10:10.931030 kubelet[3160]: E1030 00:10:10.930988 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:10:11.928762 kubelet[3160]: E1030 00:10:11.928694 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:10:11.930383 kubelet[3160]: E1030 00:10:11.930294 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:10:12.930007 kubelet[3160]: E1030 00:10:12.929387 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:10:14.469480 systemd[1]: Started sshd@19-10.200.8.44:22-10.200.16.10:51358.service - OpenSSH per-connection server daemon (10.200.16.10:51358). 
Oct 30 00:10:14.929898 kubelet[3160]: E1030 00:10:14.929479 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:10:15.098914 sshd[5496]: Accepted publickey for core from 10.200.16.10 port 51358 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:15.099793 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:15.103506 systemd-logind[1672]: New session 22 of user core. Oct 30 00:10:15.106441 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 00:10:15.609352 sshd[5501]: Connection closed by 10.200.16.10 port 51358 Oct 30 00:10:15.612445 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:15.616087 systemd-logind[1672]: Session 22 logged out. Waiting for processes to exit. Oct 30 00:10:15.616901 systemd[1]: sshd@19-10.200.8.44:22-10.200.16.10:51358.service: Deactivated successfully. Oct 30 00:10:15.620137 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 00:10:15.623692 systemd-logind[1672]: Removed session 22. Oct 30 00:10:16.146017 containerd[1697]: time="2025-10-30T00:10:16.145904888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ac3cdf897676be012c8985b6c0a5860618a2ae53e36a1aad04446b9d6a9bb68\" id:\"08ab9abcca8d75e10cab91ce19ad942cc089d8fdb7d7f152ff3b91358ddbe587\" pid:5524 exited_at:{seconds:1761783016 nanos:145429309}" Oct 30 00:10:18.931991 kubelet[3160]: E1030 00:10:18.931776 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:10:20.722158 systemd[1]: Started sshd@20-10.200.8.44:22-10.200.16.10:52306.service - OpenSSH per-connection server daemon (10.200.16.10:52306). Oct 30 00:10:21.349952 sshd[5541]: Accepted publickey for core from 10.200.16.10 port 52306 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:21.350835 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:21.354617 systemd-logind[1672]: New session 23 of user core. 
Oct 30 00:10:21.359408 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 00:10:21.873246 sshd[5544]: Connection closed by 10.200.16.10 port 52306 Oct 30 00:10:21.873675 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:21.877186 systemd-logind[1672]: Session 23 logged out. Waiting for processes to exit. Oct 30 00:10:21.878139 systemd[1]: sshd@20-10.200.8.44:22-10.200.16.10:52306.service: Deactivated successfully. Oct 30 00:10:21.880943 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 00:10:21.882942 systemd-logind[1672]: Removed session 23. Oct 30 00:10:22.929876 kubelet[3160]: E1030 00:10:22.929840 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:10:22.931999 kubelet[3160]: E1030 00:10:22.931952 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:10:24.929713 kubelet[3160]: E1030 00:10:24.929675 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:10:26.929561 kubelet[3160]: E1030 00:10:26.929520 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf" Oct 30 00:10:26.985695 systemd[1]: Started sshd@21-10.200.8.44:22-10.200.16.10:52318.service - OpenSSH per-connection server daemon (10.200.16.10:52318). Oct 30 00:10:27.626051 sshd[5558]: Accepted publickey for core from 10.200.16.10 port 52318 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:27.627021 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:27.630992 systemd-logind[1672]: New session 24 of user core. Oct 30 00:10:27.634467 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 30 00:10:28.136232 sshd[5561]: Connection closed by 10.200.16.10 port 52318 Oct 30 00:10:28.136657 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:28.139331 systemd[1]: sshd@21-10.200.8.44:22-10.200.16.10:52318.service: Deactivated successfully. Oct 30 00:10:28.140924 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 00:10:28.141591 systemd-logind[1672]: Session 24 logged out. Waiting for processes to exit. Oct 30 00:10:28.143012 systemd-logind[1672]: Removed session 24. Oct 30 00:10:29.928786 kubelet[3160]: E1030 00:10:29.928446 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dlvqm" podUID="69b7796b-0241-4712-b4ee-3f03c5de49ac" Oct 30 00:10:30.928983 kubelet[3160]: E1030 00:10:30.928870 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64fcb4dd76-8t4c6" podUID="b8c2d711-e3d2-49d2-9ce4-f8ddd389b734" Oct 30 00:10:33.252509 systemd[1]: Started sshd@22-10.200.8.44:22-10.200.16.10:56910.service - OpenSSH per-connection server daemon (10.200.16.10:56910). Oct 30 00:10:33.881573 sshd[5573]: Accepted publickey for core from 10.200.16.10 port 56910 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:33.882589 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:33.886453 systemd-logind[1672]: New session 25 of user core. Oct 30 00:10:33.890465 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 30 00:10:34.366717 sshd[5576]: Connection closed by 10.200.16.10 port 56910 Oct 30 00:10:34.368073 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:34.370572 systemd-logind[1672]: Session 25 logged out. Waiting for processes to exit. Oct 30 00:10:34.371939 systemd[1]: sshd@22-10.200.8.44:22-10.200.16.10:56910.service: Deactivated successfully. Oct 30 00:10:34.373993 systemd[1]: session-25.scope: Deactivated successfully. Oct 30 00:10:34.376433 systemd-logind[1672]: Removed session 25. Oct 30 00:10:34.929722 kubelet[3160]: E1030 00:10:34.929686 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-44scr" podUID="b7023f33-bcd5-455f-bb39-ef094539fe80" Oct 30 00:10:36.929481 kubelet[3160]: E1030 00:10:36.929364 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fgwld" podUID="a3b96faf-6434-4c32-bdb2-a83d279f75ef" Oct 30 00:10:37.928925 kubelet[3160]: E1030 00:10:37.928851 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-ffb6d876d-8qgfk" podUID="edd2e4ea-71b9-4fa2-9387-fc17f5b6fe6d" Oct 30 00:10:39.479362 systemd[1]: Started sshd@23-10.200.8.44:22-10.200.16.10:56920.service - OpenSSH per-connection server daemon (10.200.16.10:56920). Oct 30 00:10:40.119751 sshd[5588]: Accepted publickey for core from 10.200.16.10 port 56920 ssh2: RSA SHA256:+HWrfFEe9Wp2vUn2UgQ6L8Pu49TvkjpjrJ2z7oRZ0Dg Oct 30 00:10:40.120606 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:10:40.124837 systemd-logind[1672]: New session 26 of user core. Oct 30 00:10:40.130398 systemd[1]: Started session-26.scope - Session 26 of User core. 
Oct 30 00:10:40.613871 sshd[5592]: Connection closed by 10.200.16.10 port 56920 Oct 30 00:10:40.614383 sshd-session[5588]: pam_unix(sshd:session): session closed for user core Oct 30 00:10:40.617790 systemd-logind[1672]: Session 26 logged out. Waiting for processes to exit. Oct 30 00:10:40.618359 systemd[1]: sshd@23-10.200.8.44:22-10.200.16.10:56920.service: Deactivated successfully. Oct 30 00:10:40.619944 systemd[1]: session-26.scope: Deactivated successfully. Oct 30 00:10:40.621000 systemd-logind[1672]: Removed session 26. Oct 30 00:10:40.931805 kubelet[3160]: E1030 00:10:40.931740 3160 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d4dc65c88-vhhsm" podUID="4462c777-3a7c-4ea5-8cfd-9b0d8e8807cf"