Nov 24 00:22:55.982646 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:54:38 -00 2025 Nov 24 00:22:55.982671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:22:55.982682 kernel: BIOS-provided physical RAM map: Nov 24 00:22:55.982688 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:22:55.982694 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 24 00:22:55.982700 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Nov 24 00:22:55.982708 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Nov 24 00:22:55.982714 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Nov 24 00:22:55.982721 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Nov 24 00:22:55.982728 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 24 00:22:55.982735 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 24 00:22:55.982741 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 24 00:22:55.982747 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 24 00:22:55.982753 kernel: printk: legacy bootconsole [earlyser0] enabled Nov 24 00:22:55.982761 kernel: NX (Execute Disable) protection: active Nov 24 00:22:55.982769 kernel: APIC: Static calls initialized Nov 24 00:22:55.982776 kernel: efi: EFI v2.7 by Microsoft Nov 24 00:22:55.982783 kernel: efi: ACPI=0x3fffa000 
ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9da518 RNG=0x3ffd2018 Nov 24 00:22:55.982789 kernel: random: crng init done Nov 24 00:22:55.982796 kernel: secureboot: Secure boot disabled Nov 24 00:22:55.982803 kernel: SMBIOS 3.1.0 present. Nov 24 00:22:55.982809 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 24 00:22:55.982817 kernel: DMI: Memory slots populated: 2/2 Nov 24 00:22:55.982823 kernel: Hypervisor detected: Microsoft Hyper-V Nov 24 00:22:55.982829 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 24 00:22:55.982835 kernel: Hyper-V: Nested features: 0x3e0101 Nov 24 00:22:55.982843 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 24 00:22:55.982849 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 24 00:22:55.982854 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 24 00:22:55.982860 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 24 00:22:55.982865 kernel: tsc: Detected 2299.999 MHz processor Nov 24 00:22:55.982870 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 00:22:55.982877 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 00:22:55.982883 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 24 00:22:55.982890 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 24 00:22:55.982896 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 00:22:55.982904 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 24 00:22:55.982910 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 24 00:22:55.982916 kernel: Using GB pages for direct mapping Nov 24 00:22:55.982922 kernel: ACPI: Early table checksum verification disabled Nov 24 
00:22:55.982938 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 24 00:22:55.982945 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.982953 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.982959 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 24 00:22:55.982966 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 24 00:22:55.982973 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.982980 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.982987 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.982996 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 24 00:22:55.983005 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 24 00:22:55.983011 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:22:55.983018 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 24 00:22:55.983025 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 24 00:22:55.983032 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 24 00:22:55.983039 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 24 00:22:55.983045 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 24 00:22:55.983052 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 24 00:22:55.983059 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 24 00:22:55.986107 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 24 00:22:55.986124 kernel: ACPI: Reserving BGRT table memory at [mem 
0x3ffd3000-0x3ffd3037] Nov 24 00:22:55.986132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 24 00:22:55.986140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 24 00:22:55.986148 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 24 00:22:55.986156 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 24 00:22:55.986164 kernel: Zone ranges: Nov 24 00:22:55.986171 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 00:22:55.986179 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 24 00:22:55.986189 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 24 00:22:55.986197 kernel: Device empty Nov 24 00:22:55.986204 kernel: Movable zone start for each node Nov 24 00:22:55.986212 kernel: Early memory node ranges Nov 24 00:22:55.986220 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 24 00:22:55.986227 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 24 00:22:55.986235 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 24 00:22:55.986242 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 24 00:22:55.986249 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 24 00:22:55.986259 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 24 00:22:55.986266 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 00:22:55.986274 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 24 00:22:55.986281 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 24 00:22:55.986289 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 24 00:22:55.986296 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 24 00:22:55.986304 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 24 00:22:55.986311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 00:22:55.986319 kernel: ACPI: 
Using ACPI (MADT) for SMP configuration information Nov 24 00:22:55.986328 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 24 00:22:55.986336 kernel: TSC deadline timer available Nov 24 00:22:55.986343 kernel: CPU topo: Max. logical packages: 1 Nov 24 00:22:55.986350 kernel: CPU topo: Max. logical dies: 1 Nov 24 00:22:55.986358 kernel: CPU topo: Max. dies per package: 1 Nov 24 00:22:55.986365 kernel: CPU topo: Max. threads per core: 2 Nov 24 00:22:55.986372 kernel: CPU topo: Num. cores per package: 1 Nov 24 00:22:55.986380 kernel: CPU topo: Num. threads per package: 2 Nov 24 00:22:55.986387 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 24 00:22:55.986395 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 24 00:22:55.986404 kernel: Booting paravirtualized kernel on Hyper-V Nov 24 00:22:55.986412 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 00:22:55.986419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 24 00:22:55.986427 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 24 00:22:55.986434 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 24 00:22:55.986442 kernel: pcpu-alloc: [0] 0 1 Nov 24 00:22:55.986449 kernel: Hyper-V: PV spinlocks enabled Nov 24 00:22:55.986457 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 00:22:55.986468 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:22:55.986476 kernel: Dentry cache hash table entries: 1048576 (order: 
11, 8388608 bytes, linear) Nov 24 00:22:55.986483 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 24 00:22:55.986491 kernel: Fallback order for Node 0: 0 Nov 24 00:22:55.986498 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 24 00:22:55.986506 kernel: Policy zone: Normal Nov 24 00:22:55.986513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 00:22:55.986520 kernel: software IO TLB: area num 2. Nov 24 00:22:55.986528 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 24 00:22:55.986537 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 00:22:55.986545 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 00:22:55.986552 kernel: Dynamic Preempt: voluntary Nov 24 00:22:55.986560 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 00:22:55.986568 kernel: rcu: RCU event tracing is enabled. Nov 24 00:22:55.986576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 24 00:22:55.986590 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 00:22:55.986599 kernel: Rude variant of Tasks RCU enabled. Nov 24 00:22:55.986607 kernel: Tracing variant of Tasks RCU enabled. Nov 24 00:22:55.986616 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 00:22:55.986624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 24 00:22:55.986633 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:22:55.986641 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:22:55.986649 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 24 00:22:55.986658 kernel: Using NULL legacy PIC Nov 24 00:22:55.986666 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 24 00:22:55.986675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 00:22:55.986685 kernel: Console: colour dummy device 80x25 Nov 24 00:22:55.986693 kernel: printk: legacy console [tty1] enabled Nov 24 00:22:55.986701 kernel: printk: legacy console [ttyS0] enabled Nov 24 00:22:55.986708 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 24 00:22:55.986717 kernel: ACPI: Core revision 20240827 Nov 24 00:22:55.986724 kernel: Failed to register legacy timer interrupt Nov 24 00:22:55.986732 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 00:22:55.986740 kernel: x2apic enabled Nov 24 00:22:55.986749 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 00:22:55.986757 kernel: Hyper-V: Host Build 10.0.26100.1421-1-0 Nov 24 00:22:55.986765 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 24 00:22:55.986773 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 24 00:22:55.986781 kernel: Hyper-V: Using IPI hypercalls Nov 24 00:22:55.986789 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 24 00:22:55.986797 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 24 00:22:55.986805 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 24 00:22:55.986813 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 24 00:22:55.986822 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 24 00:22:55.986830 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 24 00:22:55.986838 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 24 00:22:55.986847 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4599.99 BogoMIPS (lpj=2299999) Nov 24 00:22:55.986855 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 24 00:22:55.986863 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 24 00:22:55.986870 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 24 00:22:55.986878 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 00:22:55.986886 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 00:22:55.986893 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 00:22:55.986903 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 24 00:22:55.986911 kernel: RETBleed: Vulnerable Nov 24 00:22:55.986919 kernel: Speculative Store Bypass: Vulnerable Nov 24 00:22:55.986926 kernel: active return thunk: its_return_thunk Nov 24 00:22:55.986934 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 24 00:22:55.986941 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 00:22:55.986949 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 00:22:55.986957 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 00:22:55.986964 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 24 00:22:55.986972 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 24 00:22:55.986981 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 24 00:22:55.986989 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 24 00:22:55.986997 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 24 00:22:55.987004 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 24 00:22:55.987012 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 00:22:55.987020 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 24 00:22:55.987027 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 24 00:22:55.987035 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 24 00:22:55.987042 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 24 00:22:55.987050 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 24 00:22:55.987058 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 24 00:22:55.987102 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 24 00:22:55.987112 kernel: Freeing SMP alternatives memory: 32K Nov 24 00:22:55.987120 kernel: pid_max: default: 32768 minimum: 301 Nov 24 00:22:55.987128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 00:22:55.987136 kernel: landlock: Up and running. Nov 24 00:22:55.987143 kernel: SELinux: Initializing. Nov 24 00:22:55.987151 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:22:55.987159 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:22:55.987166 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 24 00:22:55.987174 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 24 00:22:55.987182 kernel: signal: max sigframe size: 11952 Nov 24 00:22:55.987191 kernel: rcu: Hierarchical SRCU implementation. Nov 24 00:22:55.987200 kernel: rcu: Max phase no-delay instances is 400. Nov 24 00:22:55.987208 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 00:22:55.987216 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 24 00:22:55.987224 kernel: smp: Bringing up secondary CPUs ... Nov 24 00:22:55.987232 kernel: smpboot: x86: Booting SMP configuration: Nov 24 00:22:55.987240 kernel: .... 
node #0, CPUs: #1 Nov 24 00:22:55.987248 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 00:22:55.987256 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 24 00:22:55.987265 kernel: Memory: 8068828K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308184K reserved, 0K cma-reserved) Nov 24 00:22:55.987275 kernel: devtmpfs: initialized Nov 24 00:22:55.987282 kernel: x86/mm: Memory block size: 128MB Nov 24 00:22:55.987290 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 24 00:22:55.987299 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 00:22:55.987307 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 24 00:22:55.987315 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 00:22:55.987323 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 00:22:55.987331 kernel: audit: initializing netlink subsys (disabled) Nov 24 00:22:55.987339 kernel: audit: type=2000 audit(1763943772.029:1): state=initialized audit_enabled=0 res=1 Nov 24 00:22:55.987348 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 00:22:55.987356 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 00:22:55.987364 kernel: cpuidle: using governor menu Nov 24 00:22:55.987372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 00:22:55.987380 kernel: dca service started, version 1.12.1 Nov 24 00:22:55.987387 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 24 00:22:55.987395 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 24 00:22:55.987403 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 24 00:22:55.987413 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 00:22:55.987421 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 00:22:55.987429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 00:22:55.987437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 00:22:55.987445 kernel: ACPI: Added _OSI(Module Device) Nov 24 00:22:55.987452 kernel: ACPI: Added _OSI(Processor Device) Nov 24 00:22:55.987460 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 00:22:55.987468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 24 00:22:55.987476 kernel: ACPI: Interpreter enabled Nov 24 00:22:55.987486 kernel: ACPI: PM: (supports S0 S5) Nov 24 00:22:55.987494 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 00:22:55.987501 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 00:22:55.987509 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 24 00:22:55.987517 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 24 00:22:55.987525 kernel: iommu: Default domain type: Translated Nov 24 00:22:55.987533 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 00:22:55.987541 kernel: efivars: Registered efivars operations Nov 24 00:22:55.987548 kernel: PCI: Using ACPI for IRQ routing Nov 24 00:22:55.987556 kernel: PCI: System does not support PCI Nov 24 00:22:55.987566 kernel: vgaarb: loaded Nov 24 00:22:55.987574 kernel: clocksource: Switched to clocksource tsc-early Nov 24 00:22:55.987581 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 00:22:55.987589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 00:22:55.987597 kernel: pnp: PnP ACPI init Nov 24 00:22:55.987605 kernel: pnp: PnP ACPI: found 3 devices Nov 24 00:22:55.987613 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 
00:22:55.987621 kernel: NET: Registered PF_INET protocol family Nov 24 00:22:55.987629 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 24 00:22:55.987639 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 24 00:22:55.987647 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 00:22:55.987655 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 24 00:22:55.987663 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 24 00:22:55.987670 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 24 00:22:55.987678 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 24 00:22:55.987686 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 24 00:22:55.987694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 00:22:55.987704 kernel: NET: Registered PF_XDP protocol family Nov 24 00:22:55.987712 kernel: PCI: CLS 0 bytes, default 64 Nov 24 00:22:55.987720 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 24 00:22:55.987728 kernel: software IO TLB: mapped [mem 0x000000003a9da000-0x000000003e9da000] (64MB) Nov 24 00:22:55.987737 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 24 00:22:55.987746 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 24 00:22:55.987755 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 24 00:22:55.987764 kernel: clocksource: Switched to clocksource tsc Nov 24 00:22:55.987773 kernel: Initialise system trusted keyrings Nov 24 00:22:55.987783 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 24 00:22:55.987792 kernel: Key type asymmetric registered Nov 24 00:22:55.987801 kernel: Asymmetric key parser 'x509' registered Nov 24 00:22:55.987810 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Nov 24 00:22:55.987819 kernel: io scheduler mq-deadline registered Nov 24 00:22:55.987828 kernel: io scheduler kyber registered Nov 24 00:22:55.987837 kernel: io scheduler bfq registered Nov 24 00:22:55.987845 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 00:22:55.987855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 00:22:55.987865 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:22:55.987874 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 24 00:22:55.987883 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:22:55.987892 kernel: i8042: PNP: No PS/2 controller found. Nov 24 00:22:55.988025 kernel: rtc_cmos 00:02: registered as rtc0 Nov 24 00:22:55.989298 kernel: rtc_cmos 00:02: setting system clock to 2025-11-24T00:22:55 UTC (1763943775) Nov 24 00:22:55.989376 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 24 00:22:55.989386 kernel: intel_pstate: Intel P-state driver initializing Nov 24 00:22:55.989399 kernel: efifb: probing for efifb Nov 24 00:22:55.989406 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 24 00:22:55.989414 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 24 00:22:55.989421 kernel: efifb: scrolling: redraw Nov 24 00:22:55.989429 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 24 00:22:55.989437 kernel: Console: switching to colour frame buffer device 128x48 Nov 24 00:22:55.989445 kernel: fb0: EFI VGA frame buffer device Nov 24 00:22:55.989452 kernel: pstore: Using crash dump compression: deflate Nov 24 00:22:55.989460 kernel: pstore: Registered efi_pstore as persistent store backend Nov 24 00:22:55.989469 kernel: NET: Registered PF_INET6 protocol family Nov 24 00:22:55.989477 kernel: Segment Routing with IPv6 Nov 24 00:22:55.989484 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 
00:22:55.989492 kernel: NET: Registered PF_PACKET protocol family Nov 24 00:22:55.989499 kernel: Key type dns_resolver registered Nov 24 00:22:55.989507 kernel: IPI shorthand broadcast: enabled Nov 24 00:22:55.989515 kernel: sched_clock: Marking stable (2828064656, 102870314)->(3290929868, -359994898) Nov 24 00:22:55.989523 kernel: registered taskstats version 1 Nov 24 00:22:55.989530 kernel: Loading compiled-in X.509 certificates Nov 24 00:22:55.989539 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 5d380f93d180914be04be8068ab300f495c35900' Nov 24 00:22:55.989547 kernel: Demotion targets for Node 0: null Nov 24 00:22:55.989555 kernel: Key type .fscrypt registered Nov 24 00:22:55.989562 kernel: Key type fscrypt-provisioning registered Nov 24 00:22:55.989569 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 00:22:55.989577 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:22:55.989584 kernel: ima: No architecture policies found Nov 24 00:22:55.989592 kernel: clk: Disabling unused clocks Nov 24 00:22:55.989599 kernel: Warning: unable to open an initial console. Nov 24 00:22:55.989608 kernel: Freeing unused kernel image (initmem) memory: 46188K Nov 24 00:22:55.989616 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:22:55.989623 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:22:55.989631 kernel: Run /init as init process Nov 24 00:22:55.989638 kernel: with arguments: Nov 24 00:22:55.989645 kernel: /init Nov 24 00:22:55.989653 kernel: with environment: Nov 24 00:22:55.989660 kernel: HOME=/ Nov 24 00:22:55.989667 kernel: TERM=linux Nov 24 00:22:55.989678 systemd[1]: Successfully made /usr/ read-only. 
Nov 24 00:22:55.989689 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:22:55.989698 systemd[1]: Detected virtualization microsoft. Nov 24 00:22:55.989706 systemd[1]: Detected architecture x86-64. Nov 24 00:22:55.989713 systemd[1]: Running in initrd. Nov 24 00:22:55.989721 systemd[1]: No hostname configured, using default hostname. Nov 24 00:22:55.989729 systemd[1]: Hostname set to . Nov 24 00:22:55.989739 systemd[1]: Initializing machine ID from random generator. Nov 24 00:22:55.989747 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:22:55.989755 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:22:55.989763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:22:55.989772 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:22:55.989780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:22:55.989788 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:22:55.989799 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:22:55.989809 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:22:55.989818 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Nov 24 00:22:55.989827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:22:55.989835 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:22:55.989844 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:22:55.989853 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:22:55.989861 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:22:55.989872 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:22:55.989881 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:22:55.989890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:22:55.989899 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:22:55.989908 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:22:55.989917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:22:55.989925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:22:55.989934 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:22:55.989943 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:22:55.989953 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:22:55.989962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:22:55.989971 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 00:22:55.989980 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:22:55.989989 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:22:55.989998 systemd[1]: Starting systemd-journald.service - Journal Service... 
Nov 24 00:22:55.990008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:22:55.990017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:55.990034 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:22:55.990046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:22:55.990055 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:22:55.990082 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:22:55.990107 systemd-journald[185]: Collecting audit messages is disabled. Nov 24 00:22:55.990130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:22:55.990142 systemd-journald[185]: Journal started Nov 24 00:22:55.990165 systemd-journald[185]: Runtime Journal (/run/log/journal/34b82c63ff2d4ccd877856fb10bd4467) is 8M, max 158.6M, 150.6M free. Nov 24 00:22:55.981910 systemd-modules-load[187]: Inserted module 'overlay' Nov 24 00:22:55.999080 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:22:56.000381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:22:56.011928 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 00:22:56.009176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:22:56.015394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:22:56.019318 kernel: Bridge firewalling registered Nov 24 00:22:56.016112 systemd-modules-load[187]: Inserted module 'br_netfilter' Nov 24 00:22:56.020503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Nov 24 00:22:56.025538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:22:56.031866 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:22:56.036328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:22:56.040011 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:22:56.047136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:22:56.053587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:22:56.056166 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:22:56.069084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:22:56.075164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:22:56.088314 systemd-resolved[217]: Positive Trust Anchors: Nov 24 00:22:56.089888 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:22:56.089985 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:22:56.111786 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:22:56.092475 systemd-resolved[217]: Defaulting to hostname 'linux'. Nov 24 00:22:56.095340 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:22:56.123992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:22:56.163085 kernel: SCSI subsystem initialized Nov 24 00:22:56.170080 kernel: Loading iSCSI transport class v2.0-870. Nov 24 00:22:56.179096 kernel: iscsi: registered transport (tcp) Nov 24 00:22:56.195132 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:22:56.195165 kernel: QLogic iSCSI HBA Driver Nov 24 00:22:56.206941 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:22:56.216143 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:22:56.218316 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:22:56.248084 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:22:56.251399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 00:22:56.298084 kernel: raid6: avx512x4 gen() 46086 MB/s Nov 24 00:22:56.316084 kernel: raid6: avx512x2 gen() 44901 MB/s Nov 24 00:22:56.334079 kernel: raid6: avx512x1 gen() 27682 MB/s Nov 24 00:22:56.353077 kernel: raid6: avx2x4 gen() 38978 MB/s Nov 24 00:22:56.370078 kernel: raid6: avx2x2 gen() 42183 MB/s Nov 24 00:22:56.388281 kernel: raid6: avx2x1 gen() 31042 MB/s Nov 24 00:22:56.388293 kernel: raid6: using algorithm avx512x4 gen() 46086 MB/s Nov 24 00:22:56.407476 kernel: raid6: .... xor() 7703 MB/s, rmw enabled Nov 24 00:22:56.407500 kernel: raid6: using avx512x2 recovery algorithm Nov 24 00:22:56.424157 kernel: xor: automatically using best checksumming function avx Nov 24 00:22:56.535081 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:22:56.539806 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:22:56.543996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:22:56.563234 systemd-udevd[436]: Using default interface naming scheme 'v255'. Nov 24 00:22:56.566769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:22:56.571894 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:22:56.592094 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Nov 24 00:22:56.608498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:22:56.611190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 24 00:22:56.641647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:22:56.646645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:22:56.688099 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:22:56.695081 kernel: hv_vmbus: Vmbus version:5.3 Nov 24 00:22:56.707081 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 24 00:22:56.719086 kernel: hv_vmbus: registering driver hv_storvsc Nov 24 00:22:56.724084 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 24 00:22:56.724118 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 24 00:22:56.727083 kernel: scsi host0: storvsc_host_t Nov 24 00:22:56.727324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:22:56.727939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:22:56.740138 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 24 00:22:56.740175 kernel: hv_vmbus: registering driver hv_pci Nov 24 00:22:56.740186 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 24 00:22:56.734532 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:56.743374 kernel: hv_vmbus: registering driver hv_netvsc Nov 24 00:22:56.749127 kernel: AES CTR mode by8 optimization enabled Nov 24 00:22:56.749158 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 24 00:22:56.750448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:56.760420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 24 00:22:56.768228 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 24 00:22:56.768255 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 24 00:22:56.768417 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 24 00:22:56.760543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:22:56.766011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:56.782090 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:22:56.782224 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 24 00:22:56.788250 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 24 00:22:56.788287 kernel: hv_vmbus: registering driver hid_hyperv Nov 24 00:22:56.798425 kernel: PTP clock support registered Nov 24 00:22:56.802548 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 24 00:22:56.808676 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 24 00:22:56.812085 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9ca212 (unnamed net_device) (uninitialized): VF slot 1 added Nov 24 00:22:56.812243 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:22:56.822525 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 24 00:22:56.830547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 24 00:22:56.842139 kernel: hv_utils: Registering HyperV Utility Driver Nov 24 00:22:56.842170 kernel: hv_vmbus: registering driver hv_utils Nov 24 00:22:56.517820 kernel: hv_utils: Shutdown IC version 3.2 Nov 24 00:22:56.525038 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 24 00:22:56.526012 kernel: hv_utils: TimeSync IC version 4.0 Nov 24 00:22:56.526027 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 00:22:56.526035 kernel: hv_utils: Heartbeat IC version 3.0 Nov 24 00:22:56.526044 systemd-journald[185]: Time jumped backwards, rotating. Nov 24 00:22:56.517795 systemd-resolved[217]: Clock change detected. Flushing caches. Nov 24 00:22:56.534170 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 24 00:22:56.538072 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 24 00:22:56.538282 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 24 00:22:56.559172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:22:56.573267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:22:56.688216 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 24 00:22:56.694167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:22:57.013172 kernel: nvme nvme0: using unchecked data buffer Nov 24 00:22:57.242869 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 24 00:22:57.298127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 24 00:22:57.312841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 24 00:22:57.354391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 24 00:22:57.354867 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. 
Nov 24 00:22:57.355086 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 00:22:57.360825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:22:57.366957 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:22:57.371203 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:22:57.377737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:22:57.384463 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:22:57.399084 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:22:57.405195 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:22:57.508167 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 24 00:22:57.514165 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 24 00:22:57.518169 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 24 00:22:57.524171 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:22:57.543168 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 24 00:22:57.543206 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 24 00:22:57.543217 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 24 00:22:57.543226 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 24 00:22:57.564438 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:22:57.564548 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 24 00:22:57.564650 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 24 00:22:57.567916 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 24 00:22:57.578165 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Nov 24 00:22:57.581190 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9ca212 eth0: VF registering: eth1 Nov 24 00:22:57.581343 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 24 00:22:57.583714 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 24 00:22:58.415171 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:22:58.415357 disk-uuid[653]: The operation has completed successfully. Nov 24 00:22:58.464181 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:22:58.464271 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:22:58.501089 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:22:58.515130 sh[694]: Success Nov 24 00:22:58.546232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:22:58.546281 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:22:58.547113 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:22:58.555168 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 00:22:58.858451 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:22:58.865046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:22:58.882039 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 24 00:22:58.896168 kernel: BTRFS: device fsid c993ebd2-0e38-4cfc-8615-2c75294bea72 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (707) Nov 24 00:22:58.898370 kernel: BTRFS info (device dm-0): first mount of filesystem c993ebd2-0e38-4cfc-8615-2c75294bea72 Nov 24 00:22:58.898455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:59.211889 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:22:59.211985 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:22:59.213465 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:22:59.252797 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:22:59.255405 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:22:59.258717 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:22:59.259322 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:22:59.279834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 24 00:22:59.300186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (730) Nov 24 00:22:59.303588 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:59.303701 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:59.330385 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:22:59.330434 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:22:59.331583 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:22:59.337172 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:59.338334 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:22:59.344402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:22:59.360314 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:22:59.363264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:22:59.391230 systemd-networkd[876]: lo: Link UP Nov 24 00:22:59.391238 systemd-networkd[876]: lo: Gained carrier Nov 24 00:22:59.393569 systemd-networkd[876]: Enumeration completed Nov 24 00:22:59.394787 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:22:59.401207 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 24 00:22:59.394842 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 24 00:22:59.407389 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:22:59.407585 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9ca212 eth0: Data path switched to VF: enP30832s1 Nov 24 00:22:59.394844 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:22:59.397842 systemd[1]: Reached target network.target - Network. Nov 24 00:22:59.408319 systemd-networkd[876]: enP30832s1: Link UP Nov 24 00:22:59.408381 systemd-networkd[876]: eth0: Link UP Nov 24 00:22:59.408800 systemd-networkd[876]: eth0: Gained carrier Nov 24 00:22:59.408811 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:22:59.411259 systemd-networkd[876]: enP30832s1: Gained carrier Nov 24 00:22:59.434178 systemd-networkd[876]: eth0: DHCPv4 address 10.200.0.20/24, gateway 10.200.0.1 acquired from 168.63.129.16 Nov 24 00:23:00.525047 ignition[849]: Ignition 2.22.0 Nov 24 00:23:00.525060 ignition[849]: Stage: fetch-offline Nov 24 00:23:00.525186 ignition[849]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:00.526855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:23:00.525193 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:00.525284 ignition[849]: parsed url from cmdline: "" Nov 24 00:23:00.525287 ignition[849]: no config URL provided Nov 24 00:23:00.525291 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:23:00.525297 ignition[849]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:23:00.525301 ignition[849]: failed to fetch config: resource requires networking Nov 24 00:23:00.525522 ignition[849]: Ignition finished successfully Nov 24 00:23:00.540514 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 24 00:23:00.578000 ignition[886]: Ignition 2.22.0 Nov 24 00:23:00.578009 ignition[886]: Stage: fetch Nov 24 00:23:00.578227 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:00.578234 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:00.578317 ignition[886]: parsed url from cmdline: "" Nov 24 00:23:00.578320 ignition[886]: no config URL provided Nov 24 00:23:00.578329 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:23:00.578334 ignition[886]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:23:00.578352 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 24 00:23:00.646041 ignition[886]: GET result: OK Nov 24 00:23:00.646104 ignition[886]: config has been read from IMDS userdata Nov 24 00:23:00.646129 ignition[886]: parsing config with SHA512: 2bbaef05ea6f817089999a186e616b4c45069bd87711501df6eb6480718736f5cfdbe37efb27032e458050bfb88ba3d90489c4a01393f69ca92f9564b6a0693b Nov 24 00:23:00.650418 unknown[886]: fetched base config from "system" Nov 24 00:23:00.650434 unknown[886]: fetched base config from "system" Nov 24 00:23:00.650905 ignition[886]: fetch: fetch complete Nov 24 00:23:00.650440 unknown[886]: fetched user config from "azure" Nov 24 00:23:00.650910 ignition[886]: fetch: fetch passed Nov 24 00:23:00.652701 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:23:00.650945 ignition[886]: Ignition finished successfully Nov 24 00:23:00.658273 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:23:00.686319 ignition[892]: Ignition 2.22.0 Nov 24 00:23:00.686327 ignition[892]: Stage: kargs Nov 24 00:23:00.688508 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:23:00.686501 ignition[892]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:00.691306 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 24 00:23:00.686508 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:00.687184 ignition[892]: kargs: kargs passed Nov 24 00:23:00.687213 ignition[892]: Ignition finished successfully Nov 24 00:23:00.723235 ignition[898]: Ignition 2.22.0 Nov 24 00:23:00.723242 ignition[898]: Stage: disks Nov 24 00:23:00.725066 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:23:00.723436 ignition[898]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:00.728430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:23:00.723443 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:00.732541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:23:00.724129 ignition[898]: disks: disks passed Nov 24 00:23:00.733967 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:23:00.724203 ignition[898]: Ignition finished successfully Nov 24 00:23:00.741597 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:23:00.746501 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:23:00.751502 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:23:00.826109 systemd-fsck[906]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 24 00:23:00.829892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:23:00.834626 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:23:01.011262 systemd-networkd[876]: eth0: Gained IPv6LL Nov 24 00:23:01.144177 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d9d0447-100f-4769-adb5-76fdba966eb2 r/w with ordered data mode. Quota mode: none. Nov 24 00:23:01.144736 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:23:01.148576 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Nov 24 00:23:01.171568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:23:01.181229 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:23:01.186330 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 24 00:23:01.188307 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:23:01.198240 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (915) Nov 24 00:23:01.198261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:23:01.198270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:23:01.188337 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:23:01.202291 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:23:01.204225 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:23:01.204235 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:23:01.208145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:23:01.210355 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:23:01.213623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 24 00:23:01.868129 coreos-metadata[917]: Nov 24 00:23:01.868 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 24 00:23:01.873383 coreos-metadata[917]: Nov 24 00:23:01.873 INFO Fetch successful Nov 24 00:23:01.876214 coreos-metadata[917]: Nov 24 00:23:01.874 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 24 00:23:01.885348 coreos-metadata[917]: Nov 24 00:23:01.885 INFO Fetch successful Nov 24 00:23:01.917337 coreos-metadata[917]: Nov 24 00:23:01.917 INFO wrote hostname ci-4459.1.2-a-d148bafb83 to /sysroot/etc/hostname Nov 24 00:23:01.919509 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 00:23:02.146955 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:23:02.190212 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:23:02.228319 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:23:02.250632 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:23:03.367033 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:23:03.368704 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:23:03.371261 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:23:03.389292 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:23:03.393742 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:23:03.409015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 24 00:23:03.417849 ignition[1035]: INFO : Ignition 2.22.0 Nov 24 00:23:03.417849 ignition[1035]: INFO : Stage: mount Nov 24 00:23:03.421230 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:03.421230 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:03.421230 ignition[1035]: INFO : mount: mount passed Nov 24 00:23:03.421230 ignition[1035]: INFO : Ignition finished successfully Nov 24 00:23:03.422615 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:23:03.427454 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:23:03.439690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:23:03.462167 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1046) Nov 24 00:23:03.462198 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:23:03.464169 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:23:03.469687 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:23:03.469715 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:23:03.471543 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:23:03.472766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 24 00:23:03.498980 ignition[1062]: INFO : Ignition 2.22.0 Nov 24 00:23:03.498980 ignition[1062]: INFO : Stage: files Nov 24 00:23:03.505207 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:03.505207 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:03.505207 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:23:03.505207 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:23:03.505207 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:23:03.556555 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:23:03.558586 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:23:03.558586 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:23:03.556850 unknown[1062]: wrote ssh authorized keys file for user: core Nov 24 00:23:03.594927 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:23:03.598495 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:23:33.609836 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz": dial tcp 13.107.213.67:443: i/o timeout Nov 24 00:23:33.810254 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #2 Nov 24 00:23:48.832852 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:23:48.867827 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 24 00:23:48.870872 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:23:48.873284 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:23:48.875695 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:23:48.878107 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:23:48.881279 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:23:48.881279 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:23:48.881279 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:23:48.881279 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:23:48.892667 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:23:48.892667 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:23:48.892667 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:23:48.902503 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 24 00:23:48.902503 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:23:48.902503 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 00:23:49.034908 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:23:49.194642 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:23:49.194642 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:23:49.254295 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:23:49.261539 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:23:49.261539 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:23:49.261539 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:23:49.273873 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:23:49.273873 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:23:49.273873 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:23:49.273873 ignition[1062]: INFO : files: files passed Nov 24 00:23:49.273873 ignition[1062]: INFO : Ignition finished successfully Nov 24 00:23:49.265717 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 24 00:23:49.270144 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:23:49.276329 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:23:49.286953 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:23:49.287026 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:23:49.316507 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:49.316507 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:49.327384 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:49.320240 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:23:49.322244 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:23:49.329870 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:23:49.361669 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:23:49.361756 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:23:49.366141 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:23:49.370199 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:23:49.373233 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:23:49.378077 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:23:49.387885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 00:23:49.391590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:23:49.405172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:23:49.408292 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:23:49.413317 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:23:49.417340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:23:49.417473 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:23:49.417842 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:23:49.418183 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:23:49.418470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:23:49.418781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:23:49.419089 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:23:49.419476 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:23:49.420042 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:23:49.420651 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:23:49.421229 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:23:49.421606 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:23:49.421852 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:23:49.422131 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:23:49.422245 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:23:49.422777 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:23:49.423147 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 24 00:23:49.423428 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:23:49.423738 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:23:49.424046 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:23:49.424165 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:23:49.503147 ignition[1117]: INFO : Ignition 2.22.0 Nov 24 00:23:49.503147 ignition[1117]: INFO : Stage: umount Nov 24 00:23:49.503147 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:23:49.503147 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:23:49.503147 ignition[1117]: INFO : umount: umount passed Nov 24 00:23:49.503147 ignition[1117]: INFO : Ignition finished successfully Nov 24 00:23:49.444253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:23:49.444406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:23:49.447443 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:23:49.447554 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:23:49.452315 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 24 00:23:49.452397 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 00:23:49.457399 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:23:49.471549 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:23:49.490283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:23:49.490426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:23:49.496647 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:23:49.496752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 24 00:23:49.505571 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:23:49.505651 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:23:49.510364 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:23:49.510438 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:23:49.519916 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:23:49.519963 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:23:49.523331 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:23:49.523370 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:23:49.526294 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:23:49.526329 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:23:49.529197 systemd[1]: Stopped target network.target - Network. Nov 24 00:23:49.533199 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:23:49.533232 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:23:49.535430 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:23:49.538191 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:23:49.539453 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:23:49.543196 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:23:49.547191 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:23:49.551211 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:23:49.551242 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:23:49.555220 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Nov 24 00:23:49.646265 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9ca212 eth0: Data path switched from VF: enP30832s1 Nov 24 00:23:49.646423 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:23:49.555245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:23:49.557170 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:23:49.557212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:23:49.561023 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:23:49.561330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:23:49.567292 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:23:49.570607 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:23:49.574625 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:23:49.575126 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:23:49.575281 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:23:49.579331 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:23:49.579517 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:23:49.579605 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:23:49.583353 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:23:49.584269 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:23:49.586009 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:23:49.586039 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:23:49.587411 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:23:49.593778 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Nov 24 00:23:49.594830 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:23:49.600234 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:23:49.600278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:23:49.602392 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:23:49.602429 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:23:49.604821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:23:49.604864 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:23:49.610625 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:23:49.615641 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:23:49.615680 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:23:49.620496 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:23:49.620621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:23:49.627278 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:23:49.627344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:23:49.633672 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:23:49.633736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:23:49.637269 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:23:49.637306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:23:49.642243 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:23:49.642286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Nov 24 00:23:49.646428 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:23:49.646474 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:23:49.657618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:23:49.657669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:23:49.664199 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:23:49.664243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:23:49.669783 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:23:49.687663 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:23:49.687721 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:23:49.695656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:23:49.695700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:23:49.701850 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 24 00:23:49.702742 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:23:49.705503 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:23:49.705981 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:23:49.710389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:23:49.710428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:23:49.717553 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:23:49.717591 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Nov 24 00:23:49.717616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:23:49.717642 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:23:49.717872 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:23:49.717936 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:23:49.722370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:23:49.722434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:23:49.726502 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:23:49.731261 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:23:49.751476 systemd[1]: Switching root. Nov 24 00:23:49.854735 systemd-journald[185]: Journal stopped Nov 24 00:23:54.761435 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Nov 24 00:23:54.761468 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:23:54.761559 kernel: SELinux: policy capability open_perms=1 Nov 24 00:23:54.761568 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:23:54.761575 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:23:54.761582 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:23:54.761647 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:23:54.761655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:23:54.761665 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:23:54.761731 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:23:54.761739 kernel: audit: type=1403 audit(1763943831.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:23:54.761748 systemd[1]: Successfully loaded SELinux policy in 188.906ms. 
Nov 24 00:23:54.761813 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.857ms. Nov 24 00:23:54.761823 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:23:54.761888 systemd[1]: Detected virtualization microsoft. Nov 24 00:23:54.761896 systemd[1]: Detected architecture x86-64. Nov 24 00:23:54.761958 systemd[1]: Detected first boot. Nov 24 00:23:54.761968 systemd[1]: Hostname set to . Nov 24 00:23:54.761976 systemd[1]: Initializing machine ID from random generator. Nov 24 00:23:54.762039 zram_generator::config[1159]: No configuration found. Nov 24 00:23:54.762051 kernel: Guest personality initialized and is inactive Nov 24 00:23:54.762112 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Nov 24 00:23:54.762121 kernel: Initialized host personality Nov 24 00:23:54.762205 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:23:54.762217 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:23:54.762228 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:23:54.762239 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:23:54.762248 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:23:54.762261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:23:54.762273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:23:54.762283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:23:54.762294 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Nov 24 00:23:54.762303 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:23:54.762316 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:23:54.762326 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:23:54.762337 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:23:54.762346 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:23:54.762357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:23:54.762369 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:23:54.762379 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:23:54.762394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:23:54.762405 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:23:54.762417 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:23:54.762429 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:23:54.762440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:23:54.762450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:23:54.762463 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:23:54.762471 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:23:54.762481 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:23:54.762490 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Nov 24 00:23:54.762501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:23:54.762513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:23:54.762523 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:23:54.762534 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:23:54.762544 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:23:54.762555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:23:54.762571 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:23:54.762585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:23:54.762596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:23:54.762609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:23:54.762619 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:23:54.762631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:23:54.762642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:23:54.762657 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:23:54.762670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:54.762679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:23:54.762689 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:23:54.762699 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:23:54.762711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Nov 24 00:23:54.762722 systemd[1]: Reached target machines.target - Containers. Nov 24 00:23:54.762732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:23:54.762741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:23:54.762752 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:23:54.762760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:23:54.762769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:23:54.762778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:23:54.763958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:23:54.763968 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:23:54.763974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:23:54.763981 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:23:54.764027 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:23:54.764034 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:23:54.764039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:23:54.764045 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:23:54.764051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:23:54.764058 systemd[1]: Starting systemd-journald.service - Journal Service... 
Nov 24 00:23:54.764063 kernel: loop: module loaded Nov 24 00:23:54.764069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:23:54.764076 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:23:54.764082 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:23:54.764088 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:23:54.764094 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:23:54.764100 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:23:54.764105 systemd[1]: Stopped verity-setup.service. Nov 24 00:23:54.764111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:54.764117 kernel: fuse: init (API version 7.41) Nov 24 00:23:54.764122 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:23:54.764130 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:23:54.764137 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:23:54.764143 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:23:54.764176 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:23:54.764184 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:23:54.764215 systemd-journald[1256]: Collecting audit messages is disabled. Nov 24 00:23:54.764236 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:23:54.764245 systemd-journald[1256]: Journal started Nov 24 00:23:54.764265 systemd-journald[1256]: Runtime Journal (/run/log/journal/846b4175f55945beb601b67bd7373a26) is 8M, max 158.6M, 150.6M free. 
Nov 24 00:23:54.308907 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:23:54.317575 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:23:54.317943 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:23:54.767174 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:23:54.769756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:23:54.771630 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:23:54.771774 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:23:54.775422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:23:54.775573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:23:54.779400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:23:54.779542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:23:54.781438 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:23:54.781586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:23:54.785412 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:23:54.785585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:23:54.789540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:23:54.792309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:23:54.797190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:23:54.799908 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:23:54.815351 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 24 00:23:54.820233 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:23:54.832228 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:23:54.836413 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:23:54.836442 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:23:54.840940 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:23:54.847266 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:23:54.850577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:23:54.859168 kernel: ACPI: bus type drm_connector registered Nov 24 00:23:54.868817 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:23:54.872342 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:23:54.873861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:23:54.878246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:23:54.879663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:23:54.880574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:23:54.884476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:23:54.888974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:23:54.893760 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 24 00:23:54.902555 systemd-journald[1256]: Time spent on flushing to /var/log/journal/846b4175f55945beb601b67bd7373a26 is 31.144ms for 991 entries. Nov 24 00:23:54.902555 systemd-journald[1256]: System Journal (/var/log/journal/846b4175f55945beb601b67bd7373a26) is 11.9M, max 2.6G, 2.6G free. Nov 24 00:23:54.994445 systemd-journald[1256]: Received client request to flush runtime journal. Nov 24 00:23:54.994486 systemd-journald[1256]: /var/log/journal/846b4175f55945beb601b67bd7373a26/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Nov 24 00:23:54.994507 systemd-journald[1256]: Rotating system journal. Nov 24 00:23:54.994525 kernel: loop0: detected capacity change from 0 to 110984 Nov 24 00:23:54.904009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:23:54.907676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:23:54.910732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:23:54.913536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:23:54.925005 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:23:54.928384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:23:54.931250 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:23:54.995469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:23:54.999018 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 24 00:23:54.999032 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 24 00:23:55.001763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:23:55.003697 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 24 00:23:55.007445 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:23:55.013288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:23:55.097753 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:23:55.102387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:23:55.119651 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Nov 24 00:23:55.119851 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Nov 24 00:23:55.122505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:23:55.319216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:23:55.411184 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:23:55.441182 kernel: loop1: detected capacity change from 0 to 27936 Nov 24 00:23:55.609553 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:23:55.612111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:23:55.648339 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Nov 24 00:23:55.884172 kernel: loop2: detected capacity change from 0 to 229808 Nov 24 00:23:55.886071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:23:55.891279 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:23:55.970738 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:23:55.975271 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:23:55.997166 kernel: loop3: detected capacity change from 0 to 128560 Nov 24 00:23:56.038798 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 24 00:23:56.063166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 24 00:23:56.067172 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 00:23:56.095175 kernel: hv_vmbus: registering driver hv_balloon
Nov 24 00:23:56.102389 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 24 00:23:56.113287 kernel: hv_vmbus: registering driver hyperv_fb
Nov 24 00:23:56.113339 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 24 00:23:56.115682 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 24 00:23:56.117668 kernel: Console: switching to colour dummy device 80x25
Nov 24 00:23:56.121525 kernel: Console: switching to colour frame buffer device 128x48
Nov 24 00:23:56.206725 systemd-networkd[1331]: lo: Link UP
Nov 24 00:23:56.206737 systemd-networkd[1331]: lo: Gained carrier
Nov 24 00:23:56.208667 systemd-networkd[1331]: Enumeration completed
Nov 24 00:23:56.208738 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 00:23:56.211631 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:23:56.211643 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 00:23:56.214181 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Nov 24 00:23:56.215324 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 24 00:23:56.219257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 24 00:23:56.225217 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Nov 24 00:23:56.233735 systemd-networkd[1331]: enP30832s1: Link UP
Nov 24 00:23:56.233805 systemd-networkd[1331]: eth0: Link UP
Nov 24 00:23:56.234242 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9ca212 eth0: Data path switched to VF: enP30832s1
Nov 24 00:23:56.233808 systemd-networkd[1331]: eth0: Gained carrier
Nov 24 00:23:56.233823 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:23:56.238260 systemd-networkd[1331]: enP30832s1: Gained carrier
Nov 24 00:23:56.246197 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.0.20/24, gateway 10.200.0.1 acquired from 168.63.129.16
Nov 24 00:23:56.252230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:23:56.310330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:23:56.310525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:23:56.315078 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:23:56.319379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:23:56.326518 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 24 00:23:56.360416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Nov 24 00:23:56.368962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 24 00:23:56.447334 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 24 00:23:56.450912 kernel: loop4: detected capacity change from 0 to 110984
Nov 24 00:23:56.458184 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Nov 24 00:23:56.466203 kernel: loop5: detected capacity change from 0 to 27936
Nov 24 00:23:56.482169 kernel: loop6: detected capacity change from 0 to 229808
Nov 24 00:23:56.500169 kernel: loop7: detected capacity change from 0 to 128560
Nov 24 00:23:56.509605 (sd-merge)[1425]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 24 00:23:56.509962 (sd-merge)[1425]: Merged extensions into '/usr'.
Nov 24 00:23:56.513256 systemd[1]: Reload requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 24 00:23:56.513268 systemd[1]: Reloading...
Nov 24 00:23:56.574192 zram_generator::config[1460]: No configuration found.
Nov 24 00:23:56.750183 systemd[1]: Reloading finished in 236 ms.
Nov 24 00:23:56.765673 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 24 00:23:56.771040 systemd[1]: Starting ensure-sysext.service...
Nov 24 00:23:56.775261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:23:56.791360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:23:56.796634 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)...
Nov 24 00:23:56.796727 systemd[1]: Reloading...
Nov 24 00:23:56.805259 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 24 00:23:56.805288 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 24 00:23:56.805485 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 24 00:23:56.805694 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 24 00:23:56.806370 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 24 00:23:56.806594 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Nov 24 00:23:56.806638 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Nov 24 00:23:56.813283 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:23:56.813291 systemd-tmpfiles[1515]: Skipping /boot
Nov 24 00:23:56.817876 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:23:56.817887 systemd-tmpfiles[1515]: Skipping /boot
Nov 24 00:23:56.857177 zram_generator::config[1548]: No configuration found.
Nov 24 00:23:57.017830 systemd[1]: Reloading finished in 220 ms.
Nov 24 00:23:57.038688 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:23:57.047949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.048999 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 00:23:57.060857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 24 00:23:57.063466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:23:57.066330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:23:57.074303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:23:57.076592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:23:57.081334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:23:57.081571 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:23:57.092873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 24 00:23:57.097361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:23:57.100221 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 24 00:23:57.103242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.105881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 00:23:57.108299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 00:23:57.111643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:23:57.111856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:23:57.113581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 00:23:57.113677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 00:23:57.120499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.121704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:23:57.123351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:23:57.126681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:23:57.132333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:23:57.134346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:23:57.134467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:23:57.134570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.148132 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 00:23:57.148710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 00:23:57.152935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:23:57.153680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:23:57.157026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 00:23:57.157396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 00:23:57.161411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 24 00:23:57.168007 systemd[1]: Finished ensure-sysext.service.
Nov 24 00:23:57.174243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.174413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:23:57.175139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 00:23:57.179314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:23:57.179350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:23:57.179381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 00:23:57.179414 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 00:23:57.179441 systemd[1]: Reached target time-set.target - System Time Set.
Nov 24 00:23:57.184275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:57.188730 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 00:23:57.188858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 00:23:57.205044 augenrules[1651]: No rules
Nov 24 00:23:57.205739 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 24 00:23:57.205948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 24 00:23:57.244630 systemd-resolved[1620]: Positive Trust Anchors:
Nov 24 00:23:57.244641 systemd-resolved[1620]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:23:57.244669 systemd-resolved[1620]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 00:23:57.248745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 24 00:23:57.262052 systemd-resolved[1620]: Using system hostname 'ci-4459.1.2-a-d148bafb83'.
Nov 24 00:23:57.263400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 00:23:57.267255 systemd[1]: Reached target network.target - Network.
Nov 24 00:23:57.268398 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:23:57.459301 systemd-networkd[1331]: eth0: Gained IPv6LL
Nov 24 00:23:57.461055 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 24 00:23:57.464476 systemd[1]: Reached target network-online.target - Network is Online.
Nov 24 00:23:57.953671 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 24 00:23:57.955511 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 24 00:24:01.584258 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 24 00:24:01.592579 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 24 00:24:01.595119 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 24 00:24:01.610480 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 24 00:24:01.613391 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 00:24:01.614749 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 24 00:24:01.616401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 24 00:24:01.619217 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 24 00:24:01.622305 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 24 00:24:01.624456 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 24 00:24:01.625973 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 24 00:24:01.630213 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 24 00:24:01.630243 systemd[1]: Reached target paths.target - Path Units.
Nov 24 00:24:01.632228 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 00:24:01.636544 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 24 00:24:01.638768 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 24 00:24:01.641348 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 24 00:24:01.644365 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 24 00:24:01.645937 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 24 00:24:01.654579 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 24 00:24:01.658401 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 24 00:24:01.660316 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 24 00:24:01.663805 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:24:01.666195 systemd[1]: Reached target basic.target - Basic System.
Nov 24 00:24:01.667362 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 24 00:24:01.667384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 24 00:24:01.669120 systemd[1]: Starting chronyd.service - NTP client/server...
Nov 24 00:24:01.673229 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 24 00:24:01.675748 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 24 00:24:01.678518 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 24 00:24:01.681721 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 24 00:24:01.688888 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 24 00:24:01.691712 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 24 00:24:01.693603 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 24 00:24:01.700683 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 24 00:24:01.704379 jq[1670]: false
Nov 24 00:24:01.704245 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Nov 24 00:24:01.706201 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Nov 24 00:24:01.708673 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Nov 24 00:24:01.710291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:01.717341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 24 00:24:01.721353 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 24 00:24:01.725484 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 24 00:24:01.731755 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 24 00:24:01.738242 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 24 00:24:01.748254 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 24 00:24:01.753231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 24 00:24:01.753639 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 24 00:24:01.754206 systemd[1]: Starting update-engine.service - Update Engine...
Nov 24 00:24:01.758203 extend-filesystems[1671]: Found /dev/nvme0n1p6
Nov 24 00:24:01.761224 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 24 00:24:01.763217 KVP[1673]: KVP starting; pid is:1673
Nov 24 00:24:01.766466 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 24 00:24:01.769372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 24 00:24:01.769562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 24 00:24:01.776190 kernel: hv_utils: KVP IC version 4.0
Nov 24 00:24:01.776601 chronyd[1665]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Nov 24 00:24:01.782760 KVP[1673]: KVP LIC Version: 3.1
Nov 24 00:24:01.784355 extend-filesystems[1671]: Found /dev/nvme0n1p9
Nov 24 00:24:01.786503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 24 00:24:01.789394 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 24 00:24:01.790586 extend-filesystems[1671]: Checking size of /dev/nvme0n1p9
Nov 24 00:24:01.803273 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing passwd entry cache
Nov 24 00:24:01.798302 oslogin_cache_refresh[1672]: Refreshing passwd entry cache
Nov 24 00:24:01.808051 jq[1691]: true
Nov 24 00:24:01.807187 systemd[1]: motdgen.service: Deactivated successfully.
Nov 24 00:24:01.807363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 24 00:24:01.812001 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting users, quitting
Nov 24 00:24:01.811998 oslogin_cache_refresh[1672]: Failure getting users, quitting
Nov 24 00:24:01.812083 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 00:24:01.812083 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing group entry cache
Nov 24 00:24:01.812012 oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 00:24:01.812049 oslogin_cache_refresh[1672]: Refreshing group entry cache
Nov 24 00:24:01.817417 (ntainerd)[1712]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 24 00:24:01.824476 extend-filesystems[1671]: Old size kept for /dev/nvme0n1p9
Nov 24 00:24:01.826202 chronyd[1665]: Timezone right/UTC failed leap second check, ignoring
Nov 24 00:24:01.826334 chronyd[1665]: Loaded seccomp filter (level 2)
Nov 24 00:24:01.827724 systemd[1]: Started chronyd.service - NTP client/server.
Nov 24 00:24:01.830444 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 24 00:24:01.831804 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 24 00:24:01.836474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 24 00:24:01.838255 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting groups, quitting
Nov 24 00:24:01.838255 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 00:24:01.837333 oslogin_cache_refresh[1672]: Failure getting groups, quitting
Nov 24 00:24:01.838346 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 24 00:24:01.837341 oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 00:24:01.838499 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 24 00:24:01.840286 update_engine[1690]: I20251124 00:24:01.839205 1690 main.cc:92] Flatcar Update Engine starting
Nov 24 00:24:01.857024 jq[1717]: true
Nov 24 00:24:01.892551 tar[1698]: linux-amd64/LICENSE
Nov 24 00:24:01.892551 tar[1698]: linux-amd64/helm
Nov 24 00:24:01.914017 systemd-logind[1686]: New seat seat0.
Nov 24 00:24:01.919880 systemd-logind[1686]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 24 00:24:01.920023 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 24 00:24:01.970973 bash[1744]: Updated "/home/core/.ssh/authorized_keys"
Nov 24 00:24:01.971582 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 24 00:24:01.977617 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 24 00:24:02.045588 dbus-daemon[1668]: [system] SELinux support is enabled
Nov 24 00:24:02.047145 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 24 00:24:02.057817 update_engine[1690]: I20251124 00:24:02.057643 1690 update_check_scheduler.cc:74] Next update check in 7m44s
Nov 24 00:24:02.059002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 24 00:24:02.059391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 24 00:24:02.061943 dbus-daemon[1668]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 24 00:24:02.063263 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 24 00:24:02.063281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 24 00:24:02.066337 systemd[1]: Started update-engine.service - Update Engine.
Nov 24 00:24:02.082403 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 24 00:24:02.153342 coreos-metadata[1667]: Nov 24 00:24:02.153 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 24 00:24:02.160520 coreos-metadata[1667]: Nov 24 00:24:02.160 INFO Fetch successful
Nov 24 00:24:02.160520 coreos-metadata[1667]: Nov 24 00:24:02.160 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Nov 24 00:24:02.165366 coreos-metadata[1667]: Nov 24 00:24:02.164 INFO Fetch successful
Nov 24 00:24:02.168292 coreos-metadata[1667]: Nov 24 00:24:02.168 INFO Fetching http://168.63.129.16/machine/bb84c9c0-a8b5-472e-b2a8-2c4be47f4b0d/7714fc7f%2Dc2f3%2D44c1%2D91a7%2D17ca40db42a9.%5Fci%2D4459.1.2%2Da%2Dd148bafb83?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Nov 24 00:24:02.169635 coreos-metadata[1667]: Nov 24 00:24:02.169 INFO Fetch successful
Nov 24 00:24:02.169635 coreos-metadata[1667]: Nov 24 00:24:02.169 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Nov 24 00:24:02.177629 coreos-metadata[1667]: Nov 24 00:24:02.177 INFO Fetch successful
Nov 24 00:24:02.219104 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 24 00:24:02.220837 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 24 00:24:02.253808 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 24 00:24:02.312796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 24 00:24:02.317464 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 24 00:24:02.320941 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Nov 24 00:24:02.343043 systemd[1]: issuegen.service: Deactivated successfully.
Nov 24 00:24:02.343287 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 24 00:24:02.353621 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 24 00:24:02.365716 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Nov 24 00:24:02.375372 locksmithd[1771]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 24 00:24:02.393342 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 24 00:24:02.397644 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 24 00:24:02.404305 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 24 00:24:02.410413 systemd[1]: Reached target getty.target - Login Prompts.
Nov 24 00:24:02.469498 tar[1698]: linux-amd64/README.md
Nov 24 00:24:02.485035 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 24 00:24:03.020423 containerd[1712]: time="2025-11-24T00:24:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 24 00:24:03.022179 containerd[1712]: time="2025-11-24T00:24:03.021403817Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 24 00:24:03.031927 containerd[1712]: time="2025-11-24T00:24:03.031888109Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.699µs"
Nov 24 00:24:03.032038 containerd[1712]: time="2025-11-24T00:24:03.032024478Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 24 00:24:03.032084 containerd[1712]: time="2025-11-24T00:24:03.032075879Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 24 00:24:03.032246 containerd[1712]: time="2025-11-24T00:24:03.032236720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 24 00:24:03.032301 containerd[1712]: time="2025-11-24T00:24:03.032281487Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 24 00:24:03.032333 containerd[1712]: time="2025-11-24T00:24:03.032305396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032388 containerd[1712]: time="2025-11-24T00:24:03.032362817Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032388 containerd[1712]: time="2025-11-24T00:24:03.032380284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032578 containerd[1712]: time="2025-11-24T00:24:03.032557885Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032578 containerd[1712]: time="2025-11-24T00:24:03.032572178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032634 containerd[1712]: time="2025-11-24T00:24:03.032583283Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032634 containerd[1712]: time="2025-11-24T00:24:03.032591579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 24 00:24:03.032684 containerd[1712]: time="2025-11-24T00:24:03.032658873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.032833110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.032862354Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.032873194Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.032915620Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.033198187Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 24 00:24:03.033580 containerd[1712]: time="2025-11-24T00:24:03.033242580Z" level=info msg="metadata content store policy set" policy=shared
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045376969Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045429798Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045446284Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045458386Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045470134Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045480660Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045496844Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045508848Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045524167Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045542980Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045553005Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045564920Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045667232Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 24 00:24:03.046010 containerd[1712]: time="2025-11-24T00:24:03.045687833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045701608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045717352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045728023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045737350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045747602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045756727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045766556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045774955Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045783730Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045823444Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045836572Z" level=info msg="Start snapshots syncer"
Nov 24 00:24:03.046315 containerd[1712]: time="2025-11-24T00:24:03.045855042Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 24 00:24:03.046516 containerd[1712]: time="2025-11-24T00:24:03.046466977Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:24:03.046618 containerd[1712]: time="2025-11-24T00:24:03.046547052Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:24:03.046685 containerd[1712]: time="2025-11-24T00:24:03.046658615Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047670907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047801374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047821978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047834385Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047849350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047954711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047970654Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.047999572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:24:03.048173 containerd[1712]: time="2025-11-24T00:24:03.048013662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:24:03.048366 containerd[1712]: time="2025-11-24T00:24:03.048331386Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:24:03.048388 containerd[1712]: time="2025-11-24T00:24:03.048371953Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:24:03.048562 containerd[1712]: time="2025-11-24T00:24:03.048386336Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:24:03.048799 containerd[1712]: time="2025-11-24T00:24:03.048783033Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:24:03.048822 containerd[1712]: time="2025-11-24T00:24:03.048799729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:24:03.048822 containerd[1712]: time="2025-11-24T00:24:03.048807773Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:24:03.048854 containerd[1712]: time="2025-11-24T00:24:03.048820843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049181803Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049208390Z" level=info msg="runtime interface created" Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049214112Z" level=info msg="created NRI interface" Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049222645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049234765Z" level=info msg="Connect containerd service" Nov 24 00:24:03.050058 containerd[1712]: time="2025-11-24T00:24:03.049256969Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:24:03.050058 
containerd[1712]: time="2025-11-24T00:24:03.049814055Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:24:03.201017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:24:03.213497 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:24:03.520231 containerd[1712]: time="2025-11-24T00:24:03.520179656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:24:03.520431 containerd[1712]: time="2025-11-24T00:24:03.520309277Z" level=info msg="Start subscribing containerd event" Nov 24 00:24:03.520431 containerd[1712]: time="2025-11-24T00:24:03.520359629Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:24:03.520431 containerd[1712]: time="2025-11-24T00:24:03.520370246Z" level=info msg="Start recovering state" Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520459913Z" level=info msg="Start event monitor" Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520470471Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520478120Z" level=info msg="Start streaming server" Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520486289Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520492874Z" level=info msg="runtime interface starting up..." Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520498722Z" level=info msg="starting plugins..." 
Nov 24 00:24:03.520509 containerd[1712]: time="2025-11-24T00:24:03.520509331Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:24:03.520642 containerd[1712]: time="2025-11-24T00:24:03.520601980Z" level=info msg="containerd successfully booted in 0.500687s" Nov 24 00:24:03.520709 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:24:03.522658 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:24:03.526247 systemd[1]: Startup finished in 2.950s (kernel) + 55.967s (initrd) + 12.188s (userspace) = 1min 11.106s. Nov 24 00:24:03.800752 login[1803]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 24 00:24:03.802682 login[1804]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:24:03.810894 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:24:03.813669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:24:03.827770 systemd-logind[1686]: New session 1 of user core. Nov 24 00:24:03.833487 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:24:03.838381 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:24:03.852701 (systemd)[1843]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:24:03.858301 systemd-logind[1686]: New session c1 of user core. 
Nov 24 00:24:03.891543 kubelet[1827]: E1124 00:24:03.891492 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:24:03.893842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:24:03.894042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:24:03.894461 systemd[1]: kubelet.service: Consumed 882ms CPU time, 266.7M memory peak. Nov 24 00:24:04.027957 systemd[1843]: Queued start job for default target default.target. Nov 24 00:24:04.036819 systemd[1843]: Created slice app.slice - User Application Slice. Nov 24 00:24:04.036846 systemd[1843]: Reached target paths.target - Paths. Nov 24 00:24:04.036874 systemd[1843]: Reached target timers.target - Timers. Nov 24 00:24:04.037722 systemd[1843]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:24:04.045532 systemd[1843]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:24:04.045582 systemd[1843]: Reached target sockets.target - Sockets. Nov 24 00:24:04.045624 systemd[1843]: Reached target basic.target - Basic System. Nov 24 00:24:04.045686 systemd[1843]: Reached target default.target - Main User Target. Nov 24 00:24:04.045709 systemd[1843]: Startup finished in 183ms. Nov 24 00:24:04.045730 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:24:04.047283 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 24 00:24:04.313035 waagent[1799]: 2025-11-24T00:24:04.312966Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.313371Z INFO Daemon Daemon OS: flatcar 4459.1.2 Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.313453Z INFO Daemon Daemon Python: 3.11.13 Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.313741Z INFO Daemon Daemon Run daemon Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.313880Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.2' Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.314011Z INFO Daemon Daemon Using waagent for provisioning Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.314144Z INFO Daemon Daemon Activate resource disk Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.314241Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.315584Z INFO Daemon Daemon Found device: None Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.315744Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.315797Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.316407Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:24:04.320255 waagent[1799]: 2025-11-24T00:24:04.316581Z INFO Daemon Daemon Running default provisioning handler Nov 24 00:24:04.333128 waagent[1799]: 2025-11-24T00:24:04.332462Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Nov 24 00:24:04.333782 waagent[1799]: 2025-11-24T00:24:04.333751Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 24 00:24:04.334240 waagent[1799]: 2025-11-24T00:24:04.334218Z INFO Daemon Daemon cloud-init is enabled: False Nov 24 00:24:04.334520 waagent[1799]: 2025-11-24T00:24:04.334504Z INFO Daemon Daemon Copying ovf-env.xml Nov 24 00:24:04.431577 waagent[1799]: 2025-11-24T00:24:04.431530Z INFO Daemon Daemon Successfully mounted dvd Nov 24 00:24:04.462975 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 24 00:24:04.464871 waagent[1799]: 2025-11-24T00:24:04.464818Z INFO Daemon Daemon Detect protocol endpoint Nov 24 00:24:04.468351 waagent[1799]: 2025-11-24T00:24:04.465302Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:24:04.468351 waagent[1799]: 2025-11-24T00:24:04.465601Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 24 00:24:04.468351 waagent[1799]: 2025-11-24T00:24:04.466180Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 24 00:24:04.468351 waagent[1799]: 2025-11-24T00:24:04.466559Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 24 00:24:04.468351 waagent[1799]: 2025-11-24T00:24:04.466785Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 24 00:24:04.477727 waagent[1799]: 2025-11-24T00:24:04.477697Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 24 00:24:04.479356 waagent[1799]: 2025-11-24T00:24:04.479339Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 24 00:24:04.479965 waagent[1799]: 2025-11-24T00:24:04.479773Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 24 00:24:04.628486 waagent[1799]: 2025-11-24T00:24:04.628407Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 24 00:24:04.629796 waagent[1799]: 2025-11-24T00:24:04.629284Z INFO Daemon Daemon Forcing an update of the goal state. 
Nov 24 00:24:04.632975 waagent[1799]: 2025-11-24T00:24:04.632939Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:24:04.668911 waagent[1799]: 2025-11-24T00:24:04.668879Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 24 00:24:04.670405 waagent[1799]: 2025-11-24T00:24:04.670369Z INFO Daemon Nov 24 00:24:04.670969 waagent[1799]: 2025-11-24T00:24:04.670785Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5f3d31ae-8f37-4c5e-9cf4-346fa6e80687 eTag: 5213986366753714048 source: Fabric] Nov 24 00:24:04.673654 waagent[1799]: 2025-11-24T00:24:04.673626Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 24 00:24:04.675014 waagent[1799]: 2025-11-24T00:24:04.674988Z INFO Daemon Nov 24 00:24:04.675691 waagent[1799]: 2025-11-24T00:24:04.675627Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:24:04.683161 waagent[1799]: 2025-11-24T00:24:04.683115Z INFO Daemon Daemon Downloading artifacts profile blob Nov 24 00:24:04.747445 waagent[1799]: 2025-11-24T00:24:04.747399Z INFO Daemon Downloaded certificate {'thumbprint': 'D5C99881696F59195CC854F1016DB901F13704D6', 'hasPrivateKey': True} Nov 24 00:24:04.749967 waagent[1799]: 2025-11-24T00:24:04.749937Z INFO Daemon Fetch goal state completed Nov 24 00:24:04.759544 waagent[1799]: 2025-11-24T00:24:04.759483Z INFO Daemon Daemon Starting provisioning Nov 24 00:24:04.760501 waagent[1799]: 2025-11-24T00:24:04.760108Z INFO Daemon Daemon Handle ovf-env.xml. 
Nov 24 00:24:04.760766 waagent[1799]: 2025-11-24T00:24:04.760738Z INFO Daemon Daemon Set hostname [ci-4459.1.2-a-d148bafb83] Nov 24 00:24:04.799714 waagent[1799]: 2025-11-24T00:24:04.799677Z INFO Daemon Daemon Publish hostname [ci-4459.1.2-a-d148bafb83] Nov 24 00:24:04.800031 waagent[1799]: 2025-11-24T00:24:04.800000Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 24 00:24:04.801073 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:24:04.802087 waagent[1799]: 2025-11-24T00:24:04.801800Z INFO Daemon Daemon Primary interface is [eth0] Nov 24 00:24:04.807589 systemd-logind[1686]: New session 2 of user core. Nov 24 00:24:04.809793 systemd-networkd[1331]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:24:04.809800 systemd-networkd[1331]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:24:04.809823 systemd-networkd[1331]: eth0: DHCP lease lost Nov 24 00:24:04.811180 waagent[1799]: 2025-11-24T00:24:04.810562Z INFO Daemon Daemon Create user account if not exists Nov 24 00:24:04.811597 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:24:04.812267 waagent[1799]: 2025-11-24T00:24:04.811839Z INFO Daemon Daemon User core already exists, skip useradd Nov 24 00:24:04.814241 waagent[1799]: 2025-11-24T00:24:04.812501Z INFO Daemon Daemon Configure sudoer Nov 24 00:24:04.825284 waagent[1799]: 2025-11-24T00:24:04.825237Z INFO Daemon Daemon Configure sshd Nov 24 00:24:04.827213 systemd-networkd[1331]: eth0: DHCPv4 address 10.200.0.20/24, gateway 10.200.0.1 acquired from 168.63.129.16 Nov 24 00:24:04.831415 waagent[1799]: 2025-11-24T00:24:04.828056Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. 
Nov 24 00:24:04.831415 waagent[1799]: 2025-11-24T00:24:04.828396Z INFO Daemon Daemon Deploy ssh public key. Nov 24 00:24:05.897378 waagent[1799]: 2025-11-24T00:24:05.897320Z INFO Daemon Daemon Provisioning complete Nov 24 00:24:05.914685 waagent[1799]: 2025-11-24T00:24:05.914646Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 24 00:24:05.916016 waagent[1799]: 2025-11-24T00:24:05.915937Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 24 00:24:05.918101 waagent[1799]: 2025-11-24T00:24:05.918070Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 24 00:24:06.015030 waagent[1894]: 2025-11-24T00:24:06.014965Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 24 00:24:06.015300 waagent[1894]: 2025-11-24T00:24:06.015054Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.2 Nov 24 00:24:06.015300 waagent[1894]: 2025-11-24T00:24:06.015094Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 24 00:24:06.015300 waagent[1894]: 2025-11-24T00:24:06.015133Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 24 00:24:06.057291 waagent[1894]: 2025-11-24T00:24:06.057245Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 24 00:24:06.057424 waagent[1894]: 2025-11-24T00:24:06.057399Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:24:06.057477 waagent[1894]: 2025-11-24T00:24:06.057450Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:24:06.065922 waagent[1894]: 2025-11-24T00:24:06.065871Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:24:06.070262 waagent[1894]: 2025-11-24T00:24:06.070234Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 24 00:24:06.070570 
waagent[1894]: 2025-11-24T00:24:06.070542Z INFO ExtHandler Nov 24 00:24:06.070620 waagent[1894]: 2025-11-24T00:24:06.070590Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 00eecb35-f5dd-4706-9821-2b847f33ec45 eTag: 5213986366753714048 source: Fabric] Nov 24 00:24:06.070807 waagent[1894]: 2025-11-24T00:24:06.070783Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 24 00:24:06.071114 waagent[1894]: 2025-11-24T00:24:06.071090Z INFO ExtHandler Nov 24 00:24:06.071145 waagent[1894]: 2025-11-24T00:24:06.071126Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:24:06.074557 waagent[1894]: 2025-11-24T00:24:06.074529Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 24 00:24:06.154302 waagent[1894]: 2025-11-24T00:24:06.154233Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5C99881696F59195CC854F1016DB901F13704D6', 'hasPrivateKey': True} Nov 24 00:24:06.154591 waagent[1894]: 2025-11-24T00:24:06.154564Z INFO ExtHandler Fetch goal state completed Nov 24 00:24:06.166397 waagent[1894]: 2025-11-24T00:24:06.166354Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 24 00:24:06.170286 waagent[1894]: 2025-11-24T00:24:06.170243Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1894 Nov 24 00:24:06.170394 waagent[1894]: 2025-11-24T00:24:06.170371Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 24 00:24:06.170625 waagent[1894]: 2025-11-24T00:24:06.170600Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 24 00:24:06.171624 waagent[1894]: 2025-11-24T00:24:06.171593Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.2', '', 'Flatcar Container Linux by Kinvolk'] Nov 24 00:24:06.171890 waagent[1894]: 
2025-11-24T00:24:06.171866Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 24 00:24:06.171983 waagent[1894]: 2025-11-24T00:24:06.171963Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 24 00:24:06.172385 waagent[1894]: 2025-11-24T00:24:06.172365Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 24 00:24:06.226465 waagent[1894]: 2025-11-24T00:24:06.226443Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 24 00:24:06.226582 waagent[1894]: 2025-11-24T00:24:06.226561Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 24 00:24:06.231578 waagent[1894]: 2025-11-24T00:24:06.231473Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 24 00:24:06.235979 systemd[1]: Reload requested from client PID 1909 ('systemctl') (unit waagent.service)... Nov 24 00:24:06.235990 systemd[1]: Reloading... Nov 24 00:24:06.302173 zram_generator::config[1951]: No configuration found. Nov 24 00:24:06.465003 systemd[1]: Reloading finished in 228 ms. 
Nov 24 00:24:06.491186 waagent[1894]: 2025-11-24T00:24:06.490360Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 24 00:24:06.491186 waagent[1894]: 2025-11-24T00:24:06.490469Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 24 00:24:06.509168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.538169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.542171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.545796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.560208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0xa1 status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.568356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#292 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.590165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#197 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.606644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.618162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#202 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:06.630166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:24:07.205635 waagent[1894]: 2025-11-24T00:24:07.205567Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 24 00:24:07.205951 waagent[1894]: 2025-11-24T00:24:07.205911Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 24 00:24:07.206632 waagent[1894]: 2025-11-24T00:24:07.206597Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 24 00:24:07.206964 waagent[1894]: 2025-11-24T00:24:07.206935Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 24 00:24:07.207212 waagent[1894]: 2025-11-24T00:24:07.207178Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 24 00:24:07.207295 waagent[1894]: 2025-11-24T00:24:07.207260Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:24:07.207407 waagent[1894]: 2025-11-24T00:24:07.207322Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:24:07.207407 waagent[1894]: 2025-11-24T00:24:07.207386Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 24 00:24:07.207464 waagent[1894]: 2025-11-24T00:24:07.207446Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:24:07.207586 waagent[1894]: 2025-11-24T00:24:07.207565Z INFO EnvHandler ExtHandler Configure routes Nov 24 00:24:07.207586 waagent[1894]: 2025-11-24T00:24:07.207630Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:24:07.207586 waagent[1894]: 2025-11-24T00:24:07.207789Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Nov 24 00:24:07.207586 waagent[1894]: 2025-11-24T00:24:07.207916Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 24 00:24:07.207586 waagent[1894]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 24 00:24:07.207586 waagent[1894]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0 Nov 24 00:24:07.207586 waagent[1894]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 24 00:24:07.207586 waagent[1894]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:24:07.207586 waagent[1894]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:24:07.207586 waagent[1894]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:24:07.208975 waagent[1894]: 2025-11-24T00:24:07.208932Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 24 00:24:07.209078 waagent[1894]: 2025-11-24T00:24:07.209052Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 24 00:24:07.209211 waagent[1894]: 2025-11-24T00:24:07.209173Z INFO EnvHandler ExtHandler Gateway:None Nov 24 00:24:07.209272 waagent[1894]: 2025-11-24T00:24:07.209255Z INFO EnvHandler ExtHandler Routes:None Nov 24 00:24:07.209499 waagent[1894]: 2025-11-24T00:24:07.209481Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 24 00:24:07.216254 waagent[1894]: 2025-11-24T00:24:07.216218Z INFO ExtHandler ExtHandler Nov 24 00:24:07.216322 waagent[1894]: 2025-11-24T00:24:07.216286Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 71d5b50c-8ed6-4cc2-9175-2193b7de5869 correlation f567bca7-983b-475d-9dd5-c186301757b3 created: 2025-11-24T00:22:18.395226Z] Nov 24 00:24:07.216602 waagent[1894]: 2025-11-24T00:24:07.216565Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 24 00:24:07.217019 waagent[1894]: 2025-11-24T00:24:07.216998Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Nov 24 00:24:07.270860 waagent[1894]: 2025-11-24T00:24:07.270817Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Nov 24 00:24:07.270860 waagent[1894]: Try `iptables -h' or 'iptables --help' for more information.)
Nov 24 00:24:07.271204 waagent[1894]: 2025-11-24T00:24:07.271144Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B3AE11E5-E2B9-4E35-A470-061E48AEE0F3;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Nov 24 00:24:07.382980 waagent[1894]: 2025-11-24T00:24:07.382938Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 24 00:24:07.382980 waagent[1894]: Executing ['ip', '-a', '-o', 'link']:
Nov 24 00:24:07.382980 waagent[1894]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 24 00:24:07.382980 waagent[1894]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:9c:a2:12 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Nov 24 00:24:07.382980 waagent[1894]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:9c:a2:12 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Nov 24 00:24:07.382980 waagent[1894]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 24 00:24:07.382980 waagent[1894]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 24 00:24:07.382980 waagent[1894]: 2: eth0 inet 10.200.0.20/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 24 00:24:07.382980 waagent[1894]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 24 00:24:07.382980 waagent[1894]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Nov 24 00:24:07.382980 waagent[1894]: 2: eth0 inet6 fe80::6245:bdff:fe9c:a212/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Nov 24 00:24:07.481437 waagent[1894]: 2025-11-24T00:24:07.481356Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Nov 24 00:24:07.481437 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.481437 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.481437 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.481437 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.481437 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.481437 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.481437 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 24 00:24:07.481437 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 24 00:24:07.481437 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 24 00:24:07.483971 waagent[1894]: 2025-11-24T00:24:07.483926Z INFO EnvHandler ExtHandler Current Firewall rules:
Nov 24 00:24:07.483971 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.483971 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.483971 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.483971 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.483971 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 24 00:24:07.483971 waagent[1894]: pkts bytes target prot opt in out source destination
Nov 24 00:24:07.483971 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 24 00:24:07.483971 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 24 00:24:07.483971 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 24 00:24:14.031995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 24 00:24:14.033674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:14.525066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:24:14.536333 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 00:24:14.592216 kubelet[2052]: E1124 00:24:14.592179 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 00:24:14.595368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 00:24:14.595492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 00:24:14.595813 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.1M memory peak.
Nov 24 00:24:24.781993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 24 00:24:24.783394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:25.226981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:24:25.229606 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 00:24:25.265963 kubelet[2067]: E1124 00:24:25.265929 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 00:24:25.267601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 00:24:25.267723 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 00:24:25.268030 systemd[1]: kubelet.service: Consumed 122ms CPU time, 110.6M memory peak.
Nov 24 00:24:25.609253 chronyd[1665]: Selected source PHC0
Nov 24 00:24:34.983690 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 24 00:24:34.984783 systemd[1]: Started sshd@0-10.200.0.20:22-10.200.16.10:48484.service - OpenSSH per-connection server daemon (10.200.16.10:48484).
Nov 24 00:24:35.281868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 24 00:24:35.283122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:35.654089 sshd[2075]: Accepted publickey for core from 10.200.16.10 port 48484 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:35.655010 sshd-session[2075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:35.659213 systemd-logind[1686]: New session 3 of user core.
Nov 24 00:24:35.665306 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 24 00:24:35.824977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:24:35.833357 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 00:24:35.864611 kubelet[2087]: E1124 00:24:35.864561 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 00:24:35.866111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 00:24:35.866242 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 00:24:35.866650 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.3M memory peak.
Nov 24 00:24:36.139481 systemd[1]: Started sshd@1-10.200.0.20:22-10.200.16.10:48486.service - OpenSSH per-connection server daemon (10.200.16.10:48486).
Nov 24 00:24:36.683997 sshd[2096]: Accepted publickey for core from 10.200.16.10 port 48486 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:36.685028 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:36.688750 systemd-logind[1686]: New session 4 of user core.
Nov 24 00:24:36.693272 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 24 00:24:37.070705 sshd[2099]: Connection closed by 10.200.16.10 port 48486
Nov 24 00:24:37.071131 sshd-session[2096]: pam_unix(sshd:session): session closed for user core
Nov 24 00:24:37.073469 systemd[1]: sshd@1-10.200.0.20:22-10.200.16.10:48486.service: Deactivated successfully.
Nov 24 00:24:37.074879 systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 00:24:37.076388 systemd-logind[1686]: Session 4 logged out. Waiting for processes to exit.
Nov 24 00:24:37.077053 systemd-logind[1686]: Removed session 4.
Nov 24 00:24:37.179480 systemd[1]: Started sshd@2-10.200.0.20:22-10.200.16.10:48490.service - OpenSSH per-connection server daemon (10.200.16.10:48490).
Nov 24 00:24:37.728382 sshd[2105]: Accepted publickey for core from 10.200.16.10 port 48490 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:37.729428 sshd-session[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:37.733649 systemd-logind[1686]: New session 5 of user core.
Nov 24 00:24:37.739288 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 24 00:24:38.120895 sshd[2108]: Connection closed by 10.200.16.10 port 48490
Nov 24 00:24:38.121404 sshd-session[2105]: pam_unix(sshd:session): session closed for user core
Nov 24 00:24:38.124479 systemd[1]: sshd@2-10.200.0.20:22-10.200.16.10:48490.service: Deactivated successfully.
Nov 24 00:24:38.125960 systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 00:24:38.126676 systemd-logind[1686]: Session 5 logged out. Waiting for processes to exit.
Nov 24 00:24:38.127772 systemd-logind[1686]: Removed session 5.
Nov 24 00:24:38.229521 systemd[1]: Started sshd@3-10.200.0.20:22-10.200.16.10:48500.service - OpenSSH per-connection server daemon (10.200.16.10:48500).
Nov 24 00:24:38.775820 sshd[2114]: Accepted publickey for core from 10.200.16.10 port 48500 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:38.776867 sshd-session[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:38.781007 systemd-logind[1686]: New session 6 of user core.
Nov 24 00:24:38.787263 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 24 00:24:39.172521 sshd[2117]: Connection closed by 10.200.16.10 port 48500
Nov 24 00:24:39.172947 sshd-session[2114]: pam_unix(sshd:session): session closed for user core
Nov 24 00:24:39.175703 systemd[1]: sshd@3-10.200.0.20:22-10.200.16.10:48500.service: Deactivated successfully.
Nov 24 00:24:39.177044 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 00:24:39.177689 systemd-logind[1686]: Session 6 logged out. Waiting for processes to exit.
Nov 24 00:24:39.178602 systemd-logind[1686]: Removed session 6.
Nov 24 00:24:39.275388 systemd[1]: Started sshd@4-10.200.0.20:22-10.200.16.10:48512.service - OpenSSH per-connection server daemon (10.200.16.10:48512).
Nov 24 00:24:39.825681 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 48512 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:39.826732 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:39.830208 systemd-logind[1686]: New session 7 of user core.
Nov 24 00:24:39.844300 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 24 00:24:40.339527 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 24 00:24:40.339759 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 24 00:24:40.369858 sudo[2127]: pam_unix(sudo:session): session closed for user root
Nov 24 00:24:40.465273 sshd[2126]: Connection closed by 10.200.16.10 port 48512
Nov 24 00:24:40.465829 sshd-session[2123]: pam_unix(sshd:session): session closed for user core
Nov 24 00:24:40.468699 systemd[1]: sshd@4-10.200.0.20:22-10.200.16.10:48512.service: Deactivated successfully.
Nov 24 00:24:40.470214 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 00:24:40.471986 systemd-logind[1686]: Session 7 logged out. Waiting for processes to exit.
Nov 24 00:24:40.472845 systemd-logind[1686]: Removed session 7.
Nov 24 00:24:40.568458 systemd[1]: Started sshd@5-10.200.0.20:22-10.200.16.10:44730.service - OpenSSH per-connection server daemon (10.200.16.10:44730).
Nov 24 00:24:41.112581 sshd[2133]: Accepted publickey for core from 10.200.16.10 port 44730 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:41.113648 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:41.117928 systemd-logind[1686]: New session 8 of user core.
Nov 24 00:24:41.124295 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 24 00:24:41.413625 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 24 00:24:41.413815 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 24 00:24:41.419644 sudo[2138]: pam_unix(sudo:session): session closed for user root
Nov 24 00:24:41.423518 sudo[2137]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 24 00:24:41.423732 sudo[2137]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 24 00:24:41.430985 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 00:24:41.458424 augenrules[2160]: No rules
Nov 24 00:24:41.459398 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 24 00:24:41.459541 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 24 00:24:41.460712 sudo[2137]: pam_unix(sudo:session): session closed for user root
Nov 24 00:24:41.555729 sshd[2136]: Connection closed by 10.200.16.10 port 44730
Nov 24 00:24:41.556073 sshd-session[2133]: pam_unix(sshd:session): session closed for user core
Nov 24 00:24:41.558739 systemd[1]: sshd@5-10.200.0.20:22-10.200.16.10:44730.service: Deactivated successfully.
Nov 24 00:24:41.560044 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 00:24:41.560606 systemd-logind[1686]: Session 8 logged out. Waiting for processes to exit.
Nov 24 00:24:41.561449 systemd-logind[1686]: Removed session 8.
Nov 24 00:24:41.656464 systemd[1]: Started sshd@6-10.200.0.20:22-10.200.16.10:44746.service - OpenSSH per-connection server daemon (10.200.16.10:44746).
Nov 24 00:24:42.208794 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 44746 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:24:42.209810 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:24:42.213743 systemd-logind[1686]: New session 9 of user core.
Nov 24 00:24:42.218314 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 24 00:24:42.510685 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 24 00:24:42.510889 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 24 00:24:44.233177 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Nov 24 00:24:44.264055 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 24 00:24:44.280412 (dockerd)[2191]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 24 00:24:45.750795 dockerd[2191]: time="2025-11-24T00:24:45.750540774Z" level=info msg="Starting up"
Nov 24 00:24:45.751732 dockerd[2191]: time="2025-11-24T00:24:45.751691830Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 24 00:24:45.760724 dockerd[2191]: time="2025-11-24T00:24:45.760693421Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 24 00:24:45.841254 dockerd[2191]: time="2025-11-24T00:24:45.841228823Z" level=info msg="Loading containers: start."
Nov 24 00:24:45.870879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 24 00:24:45.873688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:45.893345 kernel: Initializing XFRM netlink socket
Nov 24 00:24:46.377762 systemd-networkd[1331]: docker0: Link UP
Nov 24 00:24:46.406739 dockerd[2191]: time="2025-11-24T00:24:46.406241964Z" level=info msg="Loading containers: done."
Nov 24 00:24:46.413283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:24:46.418186 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 00:24:46.443480 dockerd[2191]: time="2025-11-24T00:24:46.443460764Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 24 00:24:46.443653 dockerd[2191]: time="2025-11-24T00:24:46.443641651Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 24 00:24:46.443750 dockerd[2191]: time="2025-11-24T00:24:46.443743115Z" level=info msg="Initializing buildkit"
Nov 24 00:24:46.454484 kubelet[2373]: E1124 00:24:46.454463 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 00:24:46.456020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 00:24:46.456139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 00:24:46.456429 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.1M memory peak.
Nov 24 00:24:46.477800 dockerd[2191]: time="2025-11-24T00:24:46.477772292Z" level=info msg="Completed buildkit initialization"
Nov 24 00:24:46.484112 dockerd[2191]: time="2025-11-24T00:24:46.484085154Z" level=info msg="Daemon has completed initialization"
Nov 24 00:24:46.484497 dockerd[2191]: time="2025-11-24T00:24:46.484176449Z" level=info msg="API listen on /run/docker.sock"
Nov 24 00:24:46.484380 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 24 00:24:47.421841 update_engine[1690]: I20251124 00:24:47.421761 1690 update_attempter.cc:509] Updating boot flags...
Nov 24 00:24:47.583239 containerd[1712]: time="2025-11-24T00:24:47.582375140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\""
Nov 24 00:24:48.255451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482355154.mount: Deactivated successfully.
Nov 24 00:24:49.203613 containerd[1712]: time="2025-11-24T00:24:49.203562393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:49.205439 containerd[1712]: time="2025-11-24T00:24:49.205405943Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30112651"
Nov 24 00:24:49.207520 containerd[1712]: time="2025-11-24T00:24:49.207480874Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:49.210634 containerd[1712]: time="2025-11-24T00:24:49.210595976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:49.211317 containerd[1712]: time="2025-11-24T00:24:49.211116570Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 1.628699564s"
Nov 24 00:24:49.211317 containerd[1712]: time="2025-11-24T00:24:49.211160259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\""
Nov 24 00:24:49.211764 containerd[1712]: time="2025-11-24T00:24:49.211704000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\""
Nov 24 00:24:50.439202 containerd[1712]: time="2025-11-24T00:24:50.439142981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:50.440893 containerd[1712]: time="2025-11-24T00:24:50.440725739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018039"
Nov 24 00:24:50.442796 containerd[1712]: time="2025-11-24T00:24:50.442769620Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:50.445944 containerd[1712]: time="2025-11-24T00:24:50.445922557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:50.446529 containerd[1712]: time="2025-11-24T00:24:50.446509826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.234681093s"
Nov 24 00:24:50.446580 containerd[1712]: time="2025-11-24T00:24:50.446535004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\""
Nov 24 00:24:50.447128 containerd[1712]: time="2025-11-24T00:24:50.447107955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\""
Nov 24 00:24:51.529838 containerd[1712]: time="2025-11-24T00:24:51.529792188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:51.532049 containerd[1712]: time="2025-11-24T00:24:51.532015320Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156414"
Nov 24 00:24:51.534243 containerd[1712]: time="2025-11-24T00:24:51.534207075Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:51.537367 containerd[1712]: time="2025-11-24T00:24:51.537327629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:51.538164 containerd[1712]: time="2025-11-24T00:24:51.537891953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.090759925s"
Nov 24 00:24:51.538164 containerd[1712]: time="2025-11-24T00:24:51.537943986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\""
Nov 24 00:24:51.538439 containerd[1712]: time="2025-11-24T00:24:51.538418975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\""
Nov 24 00:24:52.432065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460676301.mount: Deactivated successfully.
Nov 24 00:24:52.818412 containerd[1712]: time="2025-11-24T00:24:52.818369242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:52.820084 containerd[1712]: time="2025-11-24T00:24:52.820052716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929032"
Nov 24 00:24:52.822242 containerd[1712]: time="2025-11-24T00:24:52.822212610Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:52.825118 containerd[1712]: time="2025-11-24T00:24:52.825082694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:52.825429 containerd[1712]: time="2025-11-24T00:24:52.825377533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 1.286934211s"
Nov 24 00:24:52.825429 containerd[1712]: time="2025-11-24T00:24:52.825406488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\""
Nov 24 00:24:52.825810 containerd[1712]: time="2025-11-24T00:24:52.825782612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 24 00:24:53.332186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254511335.mount: Deactivated successfully.
Nov 24 00:24:54.245626 containerd[1712]: time="2025-11-24T00:24:54.245579818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:54.247510 containerd[1712]: time="2025-11-24T00:24:54.247477484Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Nov 24 00:24:54.255892 containerd[1712]: time="2025-11-24T00:24:54.255856114Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:54.263714 containerd[1712]: time="2025-11-24T00:24:54.263668121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 24 00:24:54.264344 containerd[1712]: time="2025-11-24T00:24:54.264238089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.438430334s"
Nov 24 00:24:54.264344 containerd[1712]: time="2025-11-24T00:24:54.264264614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 24 00:24:54.264753 containerd[1712]: time="2025-11-24T00:24:54.264735002Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 24 00:24:54.681017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077281912.mount: Deactivated successfully.
Nov 24 00:24:54.692402 containerd[1712]: time="2025-11-24T00:24:54.692366181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 00:24:54.694277 containerd[1712]: time="2025-11-24T00:24:54.694247471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Nov 24 00:24:54.696269 containerd[1712]: time="2025-11-24T00:24:54.696238378Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 00:24:54.703177 containerd[1712]: time="2025-11-24T00:24:54.703072981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 24 00:24:54.703992 containerd[1712]: time="2025-11-24T00:24:54.703493802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 438.724716ms"
Nov 24 00:24:54.703992 containerd[1712]: time="2025-11-24T00:24:54.703519748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 24 00:24:54.704366 containerd[1712]: time="2025-11-24T00:24:54.704103485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 24 00:24:55.228500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489046353.mount: Deactivated successfully.
Nov 24 00:24:56.532007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 24 00:24:56.535274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:24:56.983145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:24:56.986125 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 24 00:24:57.018142 kubelet[2628]: E1124 00:24:57.018112 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 24 00:24:57.019695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 00:24:57.019819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 24 00:24:57.020111 systemd[1]: kubelet.service: Consumed 132ms CPU time, 108.1M memory peak.
Nov 24 00:24:57.117275 containerd[1712]: time="2025-11-24T00:24:57.117234595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:57.128171 containerd[1712]: time="2025-11-24T00:24:57.128125275Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58925893" Nov 24 00:24:57.130463 containerd[1712]: time="2025-11-24T00:24:57.130425697Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:57.133833 containerd[1712]: time="2025-11-24T00:24:57.133793029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:57.134647 containerd[1712]: time="2025-11-24T00:24:57.134522520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.430391517s" Nov 24 00:24:57.134647 containerd[1712]: time="2025-11-24T00:24:57.134548828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:24:59.218602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:24:59.218759 systemd[1]: kubelet.service: Consumed 132ms CPU time, 108.1M memory peak. Nov 24 00:24:59.220974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:24:59.251700 systemd[1]: Reload requested from client PID 2663 ('systemctl') (unit session-9.scope)... 
Nov 24 00:24:59.251712 systemd[1]: Reloading... Nov 24 00:24:59.340203 zram_generator::config[2710]: No configuration found. Nov 24 00:24:59.570029 systemd[1]: Reloading finished in 318 ms. Nov 24 00:24:59.603399 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:24:59.603473 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:24:59.603703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:24:59.603752 systemd[1]: kubelet.service: Consumed 76ms CPU time, 83.3M memory peak. Nov 24 00:24:59.604987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:25:00.086215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:25:00.089470 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:25:00.124276 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:25:00.124276 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:25:00.124276 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:25:00.124276 kubelet[2777]: I1124 00:25:00.123850 2777 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:25:00.432002 kubelet[2777]: I1124 00:25:00.431907 2777 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:25:00.432002 kubelet[2777]: I1124 00:25:00.431930 2777 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:25:00.432136 kubelet[2777]: I1124 00:25:00.432125 2777 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:25:00.467030 kubelet[2777]: E1124 00:25:00.466735 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:25:00.467333 kubelet[2777]: I1124 00:25:00.467318 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:25:00.474536 kubelet[2777]: I1124 00:25:00.474515 2777 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:25:00.476876 kubelet[2777]: I1124 00:25:00.476860 2777 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:25:00.477046 kubelet[2777]: I1124 00:25:00.477022 2777 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:25:00.477189 kubelet[2777]: I1124 00:25:00.477044 2777 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.2-a-d148bafb83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:25:00.477298 kubelet[2777]: I1124 00:25:00.477197 2777 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 
00:25:00.477298 kubelet[2777]: I1124 00:25:00.477206 2777 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:25:00.478001 kubelet[2777]: I1124 00:25:00.477990 2777 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:25:00.484567 kubelet[2777]: I1124 00:25:00.484551 2777 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:25:00.484630 kubelet[2777]: I1124 00:25:00.484570 2777 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:25:00.484630 kubelet[2777]: I1124 00:25:00.484595 2777 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:25:00.484630 kubelet[2777]: I1124 00:25:00.484608 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:25:00.493340 kubelet[2777]: E1124 00:25:00.492337 2777 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.2-a-d148bafb83&limit=500&resourceVersion=0\": dial tcp 10.200.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:25:00.493340 kubelet[2777]: E1124 00:25:00.492424 2777 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:25:00.494352 kubelet[2777]: I1124 00:25:00.494330 2777 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:25:00.494688 kubelet[2777]: I1124 00:25:00.494667 2777 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 
00:25:00.495873 kubelet[2777]: W1124 00:25:00.495848 2777 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:25:00.498442 kubelet[2777]: I1124 00:25:00.498424 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:25:00.498509 kubelet[2777]: I1124 00:25:00.498462 2777 server.go:1289] "Started kubelet" Nov 24 00:25:00.500860 kubelet[2777]: I1124 00:25:00.500822 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:25:00.501983 kubelet[2777]: I1124 00:25:00.501717 2777 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:25:00.504429 kubelet[2777]: I1124 00:25:00.504366 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:25:00.504681 kubelet[2777]: I1124 00:25:00.504658 2777 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:25:00.506228 kubelet[2777]: E1124 00:25:00.504765 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.2-a-d148bafb83.187ac9a1b8ed2e51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.2-a-d148bafb83,UID:ci-4459.1.2-a-d148bafb83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.2-a-d148bafb83,},FirstTimestamp:2025-11-24 00:25:00.498439761 +0000 UTC m=+0.405831137,LastTimestamp:2025-11-24 00:25:00.498439761 +0000 UTC m=+0.405831137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.2-a-d148bafb83,}" Nov 24 00:25:00.507356 kubelet[2777]: 
I1124 00:25:00.507345 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:25:00.508208 kubelet[2777]: I1124 00:25:00.507744 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:25:00.510268 kubelet[2777]: E1124 00:25:00.510250 2777 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:25:00.510916 kubelet[2777]: E1124 00:25:00.510880 2777 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.2-a-d148bafb83\" not found" Nov 24 00:25:00.510974 kubelet[2777]: I1124 00:25:00.510927 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:25:00.511180 kubelet[2777]: I1124 00:25:00.511169 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:25:00.511224 kubelet[2777]: I1124 00:25:00.511216 2777 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:25:00.512001 kubelet[2777]: I1124 00:25:00.511977 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:25:00.512457 kubelet[2777]: E1124 00:25:00.512429 2777 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:25:00.513420 kubelet[2777]: I1124 00:25:00.513400 2777 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:25:00.513420 kubelet[2777]: I1124 00:25:00.513419 2777 factory.go:223] Registration of the systemd container 
factory successfully Nov 24 00:25:00.542602 kubelet[2777]: E1124 00:25:00.542580 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.2-a-d148bafb83?timeout=10s\": dial tcp 10.200.0.20:6443: connect: connection refused" interval="200ms" Nov 24 00:25:00.550345 kubelet[2777]: I1124 00:25:00.550319 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:25:00.550345 kubelet[2777]: I1124 00:25:00.550340 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:25:00.550433 kubelet[2777]: I1124 00:25:00.550355 2777 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:25:00.551128 kubelet[2777]: I1124 00:25:00.551097 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:25:00.552522 kubelet[2777]: I1124 00:25:00.552444 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:25:00.552522 kubelet[2777]: I1124 00:25:00.552473 2777 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:25:00.552522 kubelet[2777]: I1124 00:25:00.552492 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:25:00.552522 kubelet[2777]: I1124 00:25:00.552500 2777 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:25:00.556272 kubelet[2777]: E1124 00:25:00.556236 2777 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:25:00.556904 kubelet[2777]: E1124 00:25:00.556590 2777 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:25:00.557278 kubelet[2777]: I1124 00:25:00.557035 2777 policy_none.go:49] "None policy: Start" Nov 24 00:25:00.557278 kubelet[2777]: I1124 00:25:00.557063 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:25:00.557278 kubelet[2777]: I1124 00:25:00.557075 2777 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:25:00.564930 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:25:00.575078 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:25:00.587044 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 24 00:25:00.588182 kubelet[2777]: E1124 00:25:00.588114 2777 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:25:00.588305 kubelet[2777]: I1124 00:25:00.588293 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:25:00.588335 kubelet[2777]: I1124 00:25:00.588306 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:25:00.588771 kubelet[2777]: I1124 00:25:00.588753 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:25:00.590255 kubelet[2777]: E1124 00:25:00.590236 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:25:00.590382 kubelet[2777]: E1124 00:25:00.590269 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.2-a-d148bafb83\" not found" Nov 24 00:25:00.667039 systemd[1]: Created slice kubepods-burstable-podd991d023aceebae3a7ea4bc4731a3a50.slice - libcontainer container kubepods-burstable-podd991d023aceebae3a7ea4bc4731a3a50.slice. Nov 24 00:25:00.686185 kubelet[2777]: E1124 00:25:00.686109 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.689333 systemd[1]: Created slice kubepods-burstable-podbddc6067e42eb91f521706f4d3c920a9.slice - libcontainer container kubepods-burstable-podbddc6067e42eb91f521706f4d3c920a9.slice. 
Nov 24 00:25:00.690441 kubelet[2777]: I1124 00:25:00.690422 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.690721 kubelet[2777]: E1124 00:25:00.690705 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.20:6443/api/v1/nodes\": dial tcp 10.200.0.20:6443: connect: connection refused" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.696985 kubelet[2777]: E1124 00:25:00.696969 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.699663 systemd[1]: Created slice kubepods-burstable-podb070cc0a460380e5816584b0d9ca3131.slice - libcontainer container kubepods-burstable-podb070cc0a460380e5816584b0d9ca3131.slice. Nov 24 00:25:00.700988 kubelet[2777]: E1124 00:25:00.700969 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.743859 kubelet[2777]: E1124 00:25:00.743824 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.2-a-d148bafb83?timeout=10s\": dial tcp 10.200.0.20:6443: connect: connection refused" interval="400ms" Nov 24 00:25:00.813093 kubelet[2777]: I1124 00:25:00.813007 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-k8s-certs\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813093 kubelet[2777]: I1124 00:25:00.813068 2777 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-ca-certs\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813206 kubelet[2777]: I1124 00:25:00.813096 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813206 kubelet[2777]: I1124 00:25:00.813119 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813206 kubelet[2777]: I1124 00:25:00.813142 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813206 kubelet[2777]: I1124 00:25:00.813173 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: 
\"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813206 kubelet[2777]: I1124 00:25:00.813195 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b070cc0a460380e5816584b0d9ca3131-kubeconfig\") pod \"kube-scheduler-ci-4459.1.2-a-d148bafb83\" (UID: \"b070cc0a460380e5816584b0d9ca3131\") " pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813307 kubelet[2777]: I1124 00:25:00.813217 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-ca-certs\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.813307 kubelet[2777]: I1124 00:25:00.813236 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.833673 kubelet[2777]: E1124 00:25:00.833602 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.2-a-d148bafb83.187ac9a1b8ed2e51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.2-a-d148bafb83,UID:ci-4459.1.2-a-d148bafb83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.2-a-d148bafb83,},FirstTimestamp:2025-11-24 00:25:00.498439761 +0000 UTC m=+0.405831137,LastTimestamp:2025-11-24 00:25:00.498439761 +0000 UTC m=+0.405831137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.2-a-d148bafb83,}" Nov 24 00:25:00.892054 kubelet[2777]: I1124 00:25:00.892031 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.892336 kubelet[2777]: E1124 00:25:00.892310 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.20:6443/api/v1/nodes\": dial tcp 10.200.0.20:6443: connect: connection refused" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:00.987855 containerd[1712]: time="2025-11-24T00:25:00.987767059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.2-a-d148bafb83,Uid:d991d023aceebae3a7ea4bc4731a3a50,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:00.998311 containerd[1712]: time="2025-11-24T00:25:00.998283616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.2-a-d148bafb83,Uid:bddc6067e42eb91f521706f4d3c920a9,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:01.002877 containerd[1712]: time="2025-11-24T00:25:01.002853468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.2-a-d148bafb83,Uid:b070cc0a460380e5816584b0d9ca3131,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:01.034072 containerd[1712]: time="2025-11-24T00:25:01.034042480Z" level=info msg="connecting to shim cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa" address="unix:///run/containerd/s/2a9eeb5fea34c5085a16cdfbd703316ba37131c7ed4808d017a527b9b1bceb81" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:01.055307 systemd[1]: Started 
cri-containerd-cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa.scope - libcontainer container cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa. Nov 24 00:25:01.059291 containerd[1712]: time="2025-11-24T00:25:01.059245023Z" level=info msg="connecting to shim 29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801" address="unix:///run/containerd/s/3cb4041924273d702ed96d14f1755c33aaa2344d868990437c9c047e7a9cfab6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:01.083063 containerd[1712]: time="2025-11-24T00:25:01.083029757Z" level=info msg="connecting to shim 7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43" address="unix:///run/containerd/s/4b31e951e9aeb4282358dc32e24266706d75b502a696ba727feedb2381a5fe85" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:01.098290 systemd[1]: Started cri-containerd-29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801.scope - libcontainer container 29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801. Nov 24 00:25:01.108240 systemd[1]: Started cri-containerd-7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43.scope - libcontainer container 7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43. 
Nov 24 00:25:01.145270 kubelet[2777]: E1124 00:25:01.145222 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.2-a-d148bafb83?timeout=10s\": dial tcp 10.200.0.20:6443: connect: connection refused" interval="800ms" Nov 24 00:25:01.166401 containerd[1712]: time="2025-11-24T00:25:01.166326669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.2-a-d148bafb83,Uid:d991d023aceebae3a7ea4bc4731a3a50,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa\"" Nov 24 00:25:01.180165 containerd[1712]: time="2025-11-24T00:25:01.180132914Z" level=info msg="CreateContainer within sandbox \"cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:25:01.183466 containerd[1712]: time="2025-11-24T00:25:01.183444408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.2-a-d148bafb83,Uid:bddc6067e42eb91f521706f4d3c920a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801\"" Nov 24 00:25:01.186220 containerd[1712]: time="2025-11-24T00:25:01.186190534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.2-a-d148bafb83,Uid:b070cc0a460380e5816584b0d9ca3131,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43\"" Nov 24 00:25:01.188814 containerd[1712]: time="2025-11-24T00:25:01.188782481Z" level=info msg="CreateContainer within sandbox \"29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:25:01.193015 containerd[1712]: time="2025-11-24T00:25:01.192627331Z" level=info 
msg="CreateContainer within sandbox \"7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:25:01.205374 containerd[1712]: time="2025-11-24T00:25:01.205352740Z" level=info msg="Container db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:01.220331 containerd[1712]: time="2025-11-24T00:25:01.220308318Z" level=info msg="Container 9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:01.237767 containerd[1712]: time="2025-11-24T00:25:01.237741242Z" level=info msg="Container 1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:01.247096 containerd[1712]: time="2025-11-24T00:25:01.246712659Z" level=info msg="CreateContainer within sandbox \"29db00d9ddfc36fa60b7ea53c506179b2861854589d0608bb0ef81576a146801\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171\"" Nov 24 00:25:01.247392 containerd[1712]: time="2025-11-24T00:25:01.247371098Z" level=info msg="StartContainer for \"db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171\"" Nov 24 00:25:01.248043 containerd[1712]: time="2025-11-24T00:25:01.248015732Z" level=info msg="connecting to shim db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171" address="unix:///run/containerd/s/3cb4041924273d702ed96d14f1755c33aaa2344d868990437c9c047e7a9cfab6" protocol=ttrpc version=3 Nov 24 00:25:01.257535 containerd[1712]: time="2025-11-24T00:25:01.257467967Z" level=info msg="CreateContainer within sandbox \"7a911455064646309060b10b1fcf136e3e7ddc4645a00cae966cc8747abc7a43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483\"" Nov 
24 00:25:01.258607 containerd[1712]: time="2025-11-24T00:25:01.258198880Z" level=info msg="StartContainer for \"1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483\"" Nov 24 00:25:01.260400 containerd[1712]: time="2025-11-24T00:25:01.260376509Z" level=info msg="CreateContainer within sandbox \"cdb5f1fe97fb51d9261a2604f014e5fcd254a6a0009bd1bbe010c7edbd5c28aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f\"" Nov 24 00:25:01.260692 containerd[1712]: time="2025-11-24T00:25:01.260670440Z" level=info msg="connecting to shim 1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483" address="unix:///run/containerd/s/4b31e951e9aeb4282358dc32e24266706d75b502a696ba727feedb2381a5fe85" protocol=ttrpc version=3 Nov 24 00:25:01.263999 containerd[1712]: time="2025-11-24T00:25:01.261776563Z" level=info msg="StartContainer for \"9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f\"" Nov 24 00:25:01.263999 containerd[1712]: time="2025-11-24T00:25:01.262610020Z" level=info msg="connecting to shim 9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f" address="unix:///run/containerd/s/2a9eeb5fea34c5085a16cdfbd703316ba37131c7ed4808d017a527b9b1bceb81" protocol=ttrpc version=3 Nov 24 00:25:01.263302 systemd[1]: Started cri-containerd-db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171.scope - libcontainer container db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171. Nov 24 00:25:01.292293 systemd[1]: Started cri-containerd-1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483.scope - libcontainer container 1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483. 
Nov 24 00:25:01.295622 kubelet[2777]: I1124 00:25:01.295583 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:01.296698 kubelet[2777]: E1124 00:25:01.296674 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.20:6443/api/v1/nodes\": dial tcp 10.200.0.20:6443: connect: connection refused" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:01.301481 systemd[1]: Started cri-containerd-9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f.scope - libcontainer container 9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f. Nov 24 00:25:01.346355 containerd[1712]: time="2025-11-24T00:25:01.346275518Z" level=info msg="StartContainer for \"db252bf4bfc4f8cdd91c96f1c66bc1c0cd681092ef65a53a082a1f77a874d171\" returns successfully" Nov 24 00:25:01.390167 containerd[1712]: time="2025-11-24T00:25:01.390073637Z" level=info msg="StartContainer for \"9ee758e99c9c340c9dd708669ddcaac97052bd80d8b96d9e82bd40d5ffe38c6f\" returns successfully" Nov 24 00:25:01.396376 containerd[1712]: time="2025-11-24T00:25:01.396340166Z" level=info msg="StartContainer for \"1439b9dc89248b01f897b3b77e6bd01134eb350e17b1ea4d59ee7058e3312483\" returns successfully" Nov 24 00:25:01.566840 kubelet[2777]: E1124 00:25:01.566820 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:01.567653 kubelet[2777]: E1124 00:25:01.567488 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:01.569907 kubelet[2777]: E1124 00:25:01.569893 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" 
Nov 24 00:25:02.100622 kubelet[2777]: I1124 00:25:02.100371 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:02.573477 kubelet[2777]: E1124 00:25:02.573046 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:02.573477 kubelet[2777]: E1124 00:25:02.573373 2777 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.059857 kubelet[2777]: E1124 00:25:03.059820 2777 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.2-a-d148bafb83\" not found" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.105169 kubelet[2777]: I1124 00:25:03.104580 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.105169 kubelet[2777]: E1124 00:25:03.104607 2777 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.2-a-d148bafb83\": node \"ci-4459.1.2-a-d148bafb83\" not found" Nov 24 00:25:03.117447 kubelet[2777]: I1124 00:25:03.117420 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.181930 kubelet[2777]: E1124 00:25:03.181903 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.181930 kubelet[2777]: I1124 00:25:03.181929 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.184661 kubelet[2777]: E1124 
00:25:03.184638 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.2-a-d148bafb83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.184661 kubelet[2777]: I1124 00:25:03.184661 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.185809 kubelet[2777]: E1124 00:25:03.185787 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:03.491352 kubelet[2777]: I1124 00:25:03.491258 2777 apiserver.go:52] "Watching apiserver" Nov 24 00:25:03.512009 kubelet[2777]: I1124 00:25:03.511978 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:25:05.060825 systemd[1]: Reload requested from client PID 3062 ('systemctl') (unit session-9.scope)... Nov 24 00:25:05.060838 systemd[1]: Reloading... Nov 24 00:25:05.158183 zram_generator::config[3105]: No configuration found. Nov 24 00:25:05.359645 systemd[1]: Reloading finished in 298 ms. Nov 24 00:25:05.383473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:25:05.404034 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:25:05.404329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:25:05.404377 systemd[1]: kubelet.service: Consumed 675ms CPU time, 129.2M memory peak. Nov 24 00:25:05.405875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:25:05.781437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:25:05.791397 (kubelet)[3176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:25:05.831166 kubelet[3176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:25:05.831166 kubelet[3176]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:25:05.831166 kubelet[3176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:25:05.831166 kubelet[3176]: I1124 00:25:05.830613 3176 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:25:05.837350 kubelet[3176]: I1124 00:25:05.837329 3176 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:25:05.837350 kubelet[3176]: I1124 00:25:05.837347 3176 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:25:05.837543 kubelet[3176]: I1124 00:25:05.837530 3176 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:25:05.838365 kubelet[3176]: I1124 00:25:05.838348 3176 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:25:05.840319 kubelet[3176]: I1124 00:25:05.839855 3176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:25:05.844166 kubelet[3176]: I1124 00:25:05.844136 3176 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Nov 24 00:25:05.847693 kubelet[3176]: I1124 00:25:05.847670 3176 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:25:05.847874 kubelet[3176]: I1124 00:25:05.847846 3176 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:25:05.848104 kubelet[3176]: I1124 00:25:05.847876 3176 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.2-a-d148bafb83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Nov 24 00:25:05.848230 kubelet[3176]: I1124 00:25:05.848106 3176 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:25:05.848230 kubelet[3176]: I1124 00:25:05.848115 3176 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:25:05.849287 kubelet[3176]: I1124 00:25:05.849216 3176 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:25:05.849432 kubelet[3176]: I1124 00:25:05.849365 3176 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:25:05.849432 kubelet[3176]: I1124 00:25:05.849381 3176 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:25:05.849432 kubelet[3176]: I1124 00:25:05.849402 3176 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:25:05.849432 kubelet[3176]: I1124 00:25:05.849414 3176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:25:05.856040 kubelet[3176]: I1124 00:25:05.855976 3176 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:25:05.856462 kubelet[3176]: I1124 00:25:05.856450 3176 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:25:05.861334 kubelet[3176]: I1124 00:25:05.861319 3176 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:25:05.861391 kubelet[3176]: I1124 00:25:05.861355 3176 server.go:1289] "Started kubelet" Nov 24 00:25:05.862552 kubelet[3176]: I1124 00:25:05.862534 3176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:25:05.865382 kubelet[3176]: I1124 00:25:05.865346 3176 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:25:05.866974 kubelet[3176]: I1124 00:25:05.866398 3176 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:25:05.869355 kubelet[3176]: I1124 00:25:05.869307 3176 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:25:05.869485 kubelet[3176]: I1124 00:25:05.869472 3176 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:25:05.869660 kubelet[3176]: I1124 00:25:05.869637 3176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:25:05.871831 kubelet[3176]: I1124 00:25:05.871578 3176 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:25:05.871831 kubelet[3176]: I1124 00:25:05.871644 3176 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:25:05.871831 kubelet[3176]: I1124 00:25:05.871733 3176 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:25:05.873924 kubelet[3176]: I1124 00:25:05.873652 3176 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:25:05.873924 kubelet[3176]: I1124 00:25:05.873732 3176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:25:05.875237 kubelet[3176]: E1124 00:25:05.875195 3176 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:25:05.875815 kubelet[3176]: I1124 00:25:05.875798 3176 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:25:05.878576 kubelet[3176]: I1124 00:25:05.878546 3176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:25:05.879507 kubelet[3176]: I1124 00:25:05.879492 3176 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 24 00:25:05.879580 kubelet[3176]: I1124 00:25:05.879574 3176 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:25:05.879622 kubelet[3176]: I1124 00:25:05.879617 3176 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:25:05.879654 kubelet[3176]: I1124 00:25:05.879650 3176 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:25:05.879712 kubelet[3176]: E1124 00:25:05.879702 3176 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:25:05.926852 kubelet[3176]: I1124 00:25:05.926837 3176 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:25:05.926852 kubelet[3176]: I1124 00:25:05.926847 3176 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:25:05.926953 kubelet[3176]: I1124 00:25:05.926860 3176 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:25:05.926975 kubelet[3176]: I1124 00:25:05.926955 3176 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:25:05.926975 kubelet[3176]: I1124 00:25:05.926962 3176 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:25:05.927017 kubelet[3176]: I1124 00:25:05.926976 3176 policy_none.go:49] "None policy: Start" Nov 24 00:25:05.927017 kubelet[3176]: I1124 00:25:05.926984 3176 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:25:05.927017 kubelet[3176]: I1124 00:25:05.926991 3176 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:25:05.927225 kubelet[3176]: I1124 00:25:05.927212 3176 state_mem.go:75] "Updated machine memory state" Nov 24 00:25:05.931433 kubelet[3176]: E1124 00:25:05.930624 3176 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:25:05.931433 kubelet[3176]: I1124 
00:25:05.930721 3176 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:25:05.931433 kubelet[3176]: I1124 00:25:05.930728 3176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:25:05.931433 kubelet[3176]: I1124 00:25:05.931017 3176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:25:05.934750 kubelet[3176]: E1124 00:25:05.934696 3176 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:25:05.981041 kubelet[3176]: I1124 00:25:05.981022 3176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:05.981336 kubelet[3176]: I1124 00:25:05.981320 3176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:05.982684 kubelet[3176]: I1124 00:25:05.981483 3176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.027899 kubelet[3176]: I1124 00:25:06.027867 3176 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:25:06.028381 kubelet[3176]: I1124 00:25:06.028094 3176 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:25:06.028614 kubelet[3176]: I1124 00:25:06.028106 3176 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:25:06.039690 kubelet[3176]: I1124 00:25:06.039635 3176 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.053746 kubelet[3176]: I1124 00:25:06.053722 3176 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.053810 kubelet[3176]: I1124 00:25:06.053770 3176 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173079 kubelet[3176]: I1124 00:25:06.172862 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173079 kubelet[3176]: I1124 00:25:06.172890 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-ca-certs\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173079 kubelet[3176]: I1124 00:25:06.172909 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173079 kubelet[3176]: I1124 00:25:06.172928 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: 
\"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173079 kubelet[3176]: I1124 00:25:06.172948 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173276 kubelet[3176]: I1124 00:25:06.172967 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b070cc0a460380e5816584b0d9ca3131-kubeconfig\") pod \"kube-scheduler-ci-4459.1.2-a-d148bafb83\" (UID: \"b070cc0a460380e5816584b0d9ca3131\") " pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173276 kubelet[3176]: I1124 00:25:06.172985 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-ca-certs\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173276 kubelet[3176]: I1124 00:25:06.173002 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bddc6067e42eb91f521706f4d3c920a9-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.2-a-d148bafb83\" (UID: \"bddc6067e42eb91f521706f4d3c920a9\") " pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.173276 kubelet[3176]: I1124 00:25:06.173019 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d991d023aceebae3a7ea4bc4731a3a50-k8s-certs\") pod \"kube-apiserver-ci-4459.1.2-a-d148bafb83\" (UID: \"d991d023aceebae3a7ea4bc4731a3a50\") " pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" Nov 24 00:25:06.852029 kubelet[3176]: I1124 00:25:06.851974 3176 apiserver.go:52] "Watching apiserver" Nov 24 00:25:06.871776 kubelet[3176]: I1124 00:25:06.871748 3176 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:25:06.939114 kubelet[3176]: I1124 00:25:06.939072 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.2-a-d148bafb83" podStartSLOduration=1.939056575 podStartE2EDuration="1.939056575s" podCreationTimestamp="2025-11-24 00:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:06.930144089 +0000 UTC m=+1.135311142" watchObservedRunningTime="2025-11-24 00:25:06.939056575 +0000 UTC m=+1.144223619" Nov 24 00:25:06.950161 kubelet[3176]: I1124 00:25:06.950109 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.2-a-d148bafb83" podStartSLOduration=1.950098869 podStartE2EDuration="1.950098869s" podCreationTimestamp="2025-11-24 00:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:06.939468327 +0000 UTC m=+1.144635379" watchObservedRunningTime="2025-11-24 00:25:06.950098869 +0000 UTC m=+1.155265921" Nov 24 00:25:09.941671 kubelet[3176]: I1124 00:25:09.941633 3176 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:25:09.941986 containerd[1712]: time="2025-11-24T00:25:09.941896040Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 24 00:25:09.942176 kubelet[3176]: I1124 00:25:09.942013 3176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:25:10.365217 kubelet[3176]: I1124 00:25:10.364577 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.2-a-d148bafb83" podStartSLOduration=5.36455828 podStartE2EDuration="5.36455828s" podCreationTimestamp="2025-11-24 00:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:06.950092077 +0000 UTC m=+1.155259128" watchObservedRunningTime="2025-11-24 00:25:10.36455828 +0000 UTC m=+4.569725338" Nov 24 00:25:10.375139 systemd[1]: Created slice kubepods-besteffort-pod2bd0fe5b_f6a3_400a_a440_d6890b514e84.slice - libcontainer container kubepods-besteffort-pod2bd0fe5b_f6a3_400a_a440_d6890b514e84.slice. Nov 24 00:25:10.401985 kubelet[3176]: I1124 00:25:10.401957 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bd0fe5b-f6a3-400a-a440-d6890b514e84-xtables-lock\") pod \"kube-proxy-5ctw2\" (UID: \"2bd0fe5b-f6a3-400a-a440-d6890b514e84\") " pod="kube-system/kube-proxy-5ctw2" Nov 24 00:25:10.402496 kubelet[3176]: I1124 00:25:10.401992 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdrq\" (UniqueName: \"kubernetes.io/projected/2bd0fe5b-f6a3-400a-a440-d6890b514e84-kube-api-access-9kdrq\") pod \"kube-proxy-5ctw2\" (UID: \"2bd0fe5b-f6a3-400a-a440-d6890b514e84\") " pod="kube-system/kube-proxy-5ctw2" Nov 24 00:25:10.402496 kubelet[3176]: I1124 00:25:10.402103 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bd0fe5b-f6a3-400a-a440-d6890b514e84-kube-proxy\") pod 
\"kube-proxy-5ctw2\" (UID: \"2bd0fe5b-f6a3-400a-a440-d6890b514e84\") " pod="kube-system/kube-proxy-5ctw2" Nov 24 00:25:10.402496 kubelet[3176]: I1124 00:25:10.402122 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bd0fe5b-f6a3-400a-a440-d6890b514e84-lib-modules\") pod \"kube-proxy-5ctw2\" (UID: \"2bd0fe5b-f6a3-400a-a440-d6890b514e84\") " pod="kube-system/kube-proxy-5ctw2" Nov 24 00:25:10.506315 kubelet[3176]: E1124 00:25:10.506284 3176 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:25:10.506315 kubelet[3176]: E1124 00:25:10.506307 3176 projected.go:194] Error preparing data for projected volume kube-api-access-9kdrq for pod kube-system/kube-proxy-5ctw2: configmap "kube-root-ca.crt" not found Nov 24 00:25:10.506420 kubelet[3176]: E1124 00:25:10.506367 3176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bd0fe5b-f6a3-400a-a440-d6890b514e84-kube-api-access-9kdrq podName:2bd0fe5b-f6a3-400a-a440-d6890b514e84 nodeName:}" failed. No retries permitted until 2025-11-24 00:25:11.006346424 +0000 UTC m=+5.211513464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9kdrq" (UniqueName: "kubernetes.io/projected/2bd0fe5b-f6a3-400a-a440-d6890b514e84-kube-api-access-9kdrq") pod "kube-proxy-5ctw2" (UID: "2bd0fe5b-f6a3-400a-a440-d6890b514e84") : configmap "kube-root-ca.crt" not found Nov 24 00:25:11.135790 systemd[1]: Created slice kubepods-besteffort-pod383da9bd_6d33_48f3_8f0b_f9b80446c3ce.slice - libcontainer container kubepods-besteffort-pod383da9bd_6d33_48f3_8f0b_f9b80446c3ce.slice. 
Nov 24 00:25:11.208382 kubelet[3176]: I1124 00:25:11.208347 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4nv\" (UniqueName: \"kubernetes.io/projected/383da9bd-6d33-48f3-8f0b-f9b80446c3ce-kube-api-access-hc4nv\") pod \"tigera-operator-7dcd859c48-qwx9q\" (UID: \"383da9bd-6d33-48f3-8f0b-f9b80446c3ce\") " pod="tigera-operator/tigera-operator-7dcd859c48-qwx9q" Nov 24 00:25:11.208673 kubelet[3176]: I1124 00:25:11.208412 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/383da9bd-6d33-48f3-8f0b-f9b80446c3ce-var-lib-calico\") pod \"tigera-operator-7dcd859c48-qwx9q\" (UID: \"383da9bd-6d33-48f3-8f0b-f9b80446c3ce\") " pod="tigera-operator/tigera-operator-7dcd859c48-qwx9q" Nov 24 00:25:11.283833 containerd[1712]: time="2025-11-24T00:25:11.283784548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ctw2,Uid:2bd0fe5b-f6a3-400a-a440-d6890b514e84,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:11.315490 containerd[1712]: time="2025-11-24T00:25:11.315428851Z" level=info msg="connecting to shim 90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f" address="unix:///run/containerd/s/65d9f21ae80485a0c85c42008f5d05f54a509af9cf99c5989d407fa8af493bd8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:11.349274 systemd[1]: Started cri-containerd-90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f.scope - libcontainer container 90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f. 
Nov 24 00:25:11.369733 containerd[1712]: time="2025-11-24T00:25:11.369654135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ctw2,Uid:2bd0fe5b-f6a3-400a-a440-d6890b514e84,Namespace:kube-system,Attempt:0,} returns sandbox id \"90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f\"" Nov 24 00:25:11.376729 containerd[1712]: time="2025-11-24T00:25:11.376706292Z" level=info msg="CreateContainer within sandbox \"90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:25:11.392626 containerd[1712]: time="2025-11-24T00:25:11.389946915Z" level=info msg="Container ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:11.403486 containerd[1712]: time="2025-11-24T00:25:11.403463120Z" level=info msg="CreateContainer within sandbox \"90d396619632bf9bf83fe2e508ed20a397b344dcfe4fcf02028db66d03606d3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a\"" Nov 24 00:25:11.404494 containerd[1712]: time="2025-11-24T00:25:11.403806662Z" level=info msg="StartContainer for \"ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a\"" Nov 24 00:25:11.404966 containerd[1712]: time="2025-11-24T00:25:11.404940337Z" level=info msg="connecting to shim ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a" address="unix:///run/containerd/s/65d9f21ae80485a0c85c42008f5d05f54a509af9cf99c5989d407fa8af493bd8" protocol=ttrpc version=3 Nov 24 00:25:11.422285 systemd[1]: Started cri-containerd-ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a.scope - libcontainer container ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a. 
Nov 24 00:25:11.438508 containerd[1712]: time="2025-11-24T00:25:11.438469686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qwx9q,Uid:383da9bd-6d33-48f3-8f0b-f9b80446c3ce,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:25:11.464556 containerd[1712]: time="2025-11-24T00:25:11.464509793Z" level=info msg="connecting to shim 6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947" address="unix:///run/containerd/s/8ffacd5e1382499e74f898f0f6b7d57d28aac59b89896f41aa2d70f3e13d89ed" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:11.487389 containerd[1712]: time="2025-11-24T00:25:11.487307824Z" level=info msg="StartContainer for \"ff299571b63707bae3f04b9298b9ca5c6de29b00afb93d22d4c41d9a69c2b68a\" returns successfully" Nov 24 00:25:11.495616 systemd[1]: Started cri-containerd-6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947.scope - libcontainer container 6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947. Nov 24 00:25:11.543510 containerd[1712]: time="2025-11-24T00:25:11.543486229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qwx9q,Uid:383da9bd-6d33-48f3-8f0b-f9b80446c3ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947\"" Nov 24 00:25:11.544713 containerd[1712]: time="2025-11-24T00:25:11.544697574Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:25:11.931963 kubelet[3176]: I1124 00:25:11.931583 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5ctw2" podStartSLOduration=1.931566883 podStartE2EDuration="1.931566883s" podCreationTimestamp="2025-11-24 00:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:11.93116621 +0000 UTC m=+6.136333268" watchObservedRunningTime="2025-11-24 
00:25:11.931566883 +0000 UTC m=+6.136733937" Nov 24 00:25:13.143847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499020684.mount: Deactivated successfully. Nov 24 00:25:13.523242 containerd[1712]: time="2025-11-24T00:25:13.523044592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:13.524906 containerd[1712]: time="2025-11-24T00:25:13.524816209Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:25:13.527064 containerd[1712]: time="2025-11-24T00:25:13.527040578Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:13.530227 containerd[1712]: time="2025-11-24T00:25:13.530170531Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:13.530655 containerd[1712]: time="2025-11-24T00:25:13.530636237Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.985845066s" Nov 24 00:25:13.530718 containerd[1712]: time="2025-11-24T00:25:13.530706566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:25:13.535847 containerd[1712]: time="2025-11-24T00:25:13.535820124Z" level=info msg="CreateContainer within sandbox \"6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:25:13.547331 containerd[1712]: time="2025-11-24T00:25:13.545499398Z" level=info msg="Container bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:13.558434 containerd[1712]: time="2025-11-24T00:25:13.558411861Z" level=info msg="CreateContainer within sandbox \"6abb7bc73799510e259bffddf46afb28e1dcb664ca94c14b758423942237c947\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29\"" Nov 24 00:25:13.558891 containerd[1712]: time="2025-11-24T00:25:13.558766468Z" level=info msg="StartContainer for \"bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29\"" Nov 24 00:25:13.559699 containerd[1712]: time="2025-11-24T00:25:13.559651891Z" level=info msg="connecting to shim bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29" address="unix:///run/containerd/s/8ffacd5e1382499e74f898f0f6b7d57d28aac59b89896f41aa2d70f3e13d89ed" protocol=ttrpc version=3 Nov 24 00:25:13.580297 systemd[1]: Started cri-containerd-bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29.scope - libcontainer container bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29. 
Nov 24 00:25:13.604293 containerd[1712]: time="2025-11-24T00:25:13.604252115Z" level=info msg="StartContainer for \"bad9b798c3952c67f08ddbfb319c8d35c2ce163684a3d59df936e41d9088af29\" returns successfully" Nov 24 00:25:13.940687 kubelet[3176]: I1124 00:25:13.940630 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-qwx9q" podStartSLOduration=0.953673704 podStartE2EDuration="2.940615349s" podCreationTimestamp="2025-11-24 00:25:11 +0000 UTC" firstStartedPulling="2025-11-24 00:25:11.544252688 +0000 UTC m=+5.749419741" lastFinishedPulling="2025-11-24 00:25:13.531194346 +0000 UTC m=+7.736361386" observedRunningTime="2025-11-24 00:25:13.940534967 +0000 UTC m=+8.145702012" watchObservedRunningTime="2025-11-24 00:25:13.940615349 +0000 UTC m=+8.145782398" Nov 24 00:25:19.081240 sudo[2173]: pam_unix(sudo:session): session closed for user root Nov 24 00:25:19.167858 sshd[2172]: Connection closed by 10.200.16.10 port 44746 Nov 24 00:25:19.169311 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:19.174481 systemd-logind[1686]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:25:19.175521 systemd[1]: sshd@6-10.200.0.20:22-10.200.16.10:44746.service: Deactivated successfully. Nov 24 00:25:19.178986 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:25:19.179475 systemd[1]: session-9.scope: Consumed 3.199s CPU time, 230.5M memory peak. Nov 24 00:25:19.185616 systemd-logind[1686]: Removed session 9. Nov 24 00:25:24.902904 systemd[1]: Created slice kubepods-besteffort-podf29da61b_56da_4c6b_98e1_46f8e25fdf67.slice - libcontainer container kubepods-besteffort-podf29da61b_56da_4c6b_98e1_46f8e25fdf67.slice. 
Nov 24 00:25:24.997540 kubelet[3176]: I1124 00:25:24.997506 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gmzc\" (UniqueName: \"kubernetes.io/projected/f29da61b-56da-4c6b-98e1-46f8e25fdf67-kube-api-access-5gmzc\") pod \"calico-typha-575c5d7d74-68z8t\" (UID: \"f29da61b-56da-4c6b-98e1-46f8e25fdf67\") " pod="calico-system/calico-typha-575c5d7d74-68z8t" Nov 24 00:25:24.997825 kubelet[3176]: I1124 00:25:24.997548 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f29da61b-56da-4c6b-98e1-46f8e25fdf67-tigera-ca-bundle\") pod \"calico-typha-575c5d7d74-68z8t\" (UID: \"f29da61b-56da-4c6b-98e1-46f8e25fdf67\") " pod="calico-system/calico-typha-575c5d7d74-68z8t" Nov 24 00:25:24.997825 kubelet[3176]: I1124 00:25:24.997564 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f29da61b-56da-4c6b-98e1-46f8e25fdf67-typha-certs\") pod \"calico-typha-575c5d7d74-68z8t\" (UID: \"f29da61b-56da-4c6b-98e1-46f8e25fdf67\") " pod="calico-system/calico-typha-575c5d7d74-68z8t" Nov 24 00:25:25.116538 systemd[1]: Created slice kubepods-besteffort-podf8ecae07_143b_451e_9ab6_ad378af8d675.slice - libcontainer container kubepods-besteffort-podf8ecae07_143b_451e_9ab6_ad378af8d675.slice. 
Nov 24 00:25:25.199389 kubelet[3176]: I1124 00:25:25.198862 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-var-lib-calico\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199389 kubelet[3176]: I1124 00:25:25.198893 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-lib-modules\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199389 kubelet[3176]: I1124 00:25:25.198915 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-flexvol-driver-host\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199389 kubelet[3176]: I1124 00:25:25.198932 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f8ecae07-143b-451e-9ab6-ad378af8d675-node-certs\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199389 kubelet[3176]: I1124 00:25:25.198952 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-policysync\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199585 kubelet[3176]: I1124 00:25:25.198969 3176 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-xtables-lock\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199585 kubelet[3176]: I1124 00:25:25.198991 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-cni-net-dir\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199585 kubelet[3176]: I1124 00:25:25.199010 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgfg\" (UniqueName: \"kubernetes.io/projected/f8ecae07-143b-451e-9ab6-ad378af8d675-kube-api-access-xzgfg\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199585 kubelet[3176]: I1124 00:25:25.199027 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-cni-log-dir\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199585 kubelet[3176]: I1124 00:25:25.199045 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ecae07-143b-451e-9ab6-ad378af8d675-tigera-ca-bundle\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199701 kubelet[3176]: I1124 00:25:25.199068 3176 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-var-run-calico\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.199701 kubelet[3176]: I1124 00:25:25.199087 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f8ecae07-143b-451e-9ab6-ad378af8d675-cni-bin-dir\") pod \"calico-node-5wqr2\" (UID: \"f8ecae07-143b-451e-9ab6-ad378af8d675\") " pod="calico-system/calico-node-5wqr2" Nov 24 00:25:25.207508 containerd[1712]: time="2025-11-24T00:25:25.207477080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c5d7d74-68z8t,Uid:f29da61b-56da-4c6b-98e1-46f8e25fdf67,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:25.242504 containerd[1712]: time="2025-11-24T00:25:25.242446423Z" level=info msg="connecting to shim 9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf" address="unix:///run/containerd/s/656272cf2d6bfa77fa62e2cdeb54d4527006aa08dd6ee8920f5864fac94dc326" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:25.272312 systemd[1]: Started cri-containerd-9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf.scope - libcontainer container 9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf. 
Nov 24 00:25:25.302478 kubelet[3176]: E1124 00:25:25.302288 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.302478 kubelet[3176]: W1124 00:25:25.302305 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.302478 kubelet[3176]: E1124 00:25:25.302344 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.302478 kubelet[3176]: E1124 00:25:25.302461 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.302478 kubelet[3176]: W1124 00:25:25.302466 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.302478 kubelet[3176]: E1124 00:25:25.302473 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.302695 kubelet[3176]: E1124 00:25:25.302595 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.302695 kubelet[3176]: W1124 00:25:25.302600 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.302695 kubelet[3176]: E1124 00:25:25.302606 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.302962 kubelet[3176]: E1124 00:25:25.302951 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.302962 kubelet[3176]: W1124 00:25:25.302962 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.303562 kubelet[3176]: E1124 00:25:25.303260 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.303562 kubelet[3176]: E1124 00:25:25.303531 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.303562 kubelet[3176]: W1124 00:25:25.303545 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.303701 kubelet[3176]: E1124 00:25:25.303555 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.304299 kubelet[3176]: E1124 00:25:25.304284 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.304299 kubelet[3176]: W1124 00:25:25.304298 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.304383 kubelet[3176]: E1124 00:25:25.304310 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.305143 kubelet[3176]: E1124 00:25:25.305052 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.305143 kubelet[3176]: W1124 00:25:25.305064 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.305143 kubelet[3176]: E1124 00:25:25.305075 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.305747 kubelet[3176]: E1124 00:25:25.305636 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.305747 kubelet[3176]: W1124 00:25:25.305649 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.305747 kubelet[3176]: E1124 00:25:25.305660 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.305938 kubelet[3176]: E1124 00:25:25.305909 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.305938 kubelet[3176]: W1124 00:25:25.305918 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.305938 kubelet[3176]: E1124 00:25:25.305928 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.306222 kubelet[3176]: E1124 00:25:25.306194 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.306222 kubelet[3176]: W1124 00:25:25.306204 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.306222 kubelet[3176]: E1124 00:25:25.306213 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.306513 kubelet[3176]: E1124 00:25:25.306488 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.306513 kubelet[3176]: W1124 00:25:25.306496 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.306513 kubelet[3176]: E1124 00:25:25.306505 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.306748 kubelet[3176]: E1124 00:25:25.306725 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.306748 kubelet[3176]: W1124 00:25:25.306732 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.306748 kubelet[3176]: E1124 00:25:25.306740 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.307022 kubelet[3176]: E1124 00:25:25.306998 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.307022 kubelet[3176]: W1124 00:25:25.307008 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.307022 kubelet[3176]: E1124 00:25:25.307019 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.307183 kubelet[3176]: E1124 00:25:25.307143 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.307215 kubelet[3176]: W1124 00:25:25.307205 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.307241 kubelet[3176]: E1124 00:25:25.307213 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.307365 kubelet[3176]: E1124 00:25:25.307356 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.307365 kubelet[3176]: W1124 00:25:25.307363 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.307434 kubelet[3176]: E1124 00:25:25.307370 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.307525 kubelet[3176]: E1124 00:25:25.307517 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.307550 kubelet[3176]: W1124 00:25:25.307525 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.307550 kubelet[3176]: E1124 00:25:25.307532 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.307694 kubelet[3176]: E1124 00:25:25.307655 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.307694 kubelet[3176]: W1124 00:25:25.307663 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.307694 kubelet[3176]: E1124 00:25:25.307671 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.312641 kubelet[3176]: E1124 00:25:25.312353 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.312641 kubelet[3176]: W1124 00:25:25.312365 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.312641 kubelet[3176]: E1124 00:25:25.312376 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.321770 kubelet[3176]: E1124 00:25:25.321736 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:25.329214 kubelet[3176]: E1124 00:25:25.329197 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.329214 kubelet[3176]: W1124 00:25:25.329214 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.329309 kubelet[3176]: E1124 00:25:25.329226 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.356053 containerd[1712]: time="2025-11-24T00:25:25.355821226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c5d7d74-68z8t,Uid:f29da61b-56da-4c6b-98e1-46f8e25fdf67,Namespace:calico-system,Attempt:0,} returns sandbox id \"9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf\"" Nov 24 00:25:25.358484 containerd[1712]: time="2025-11-24T00:25:25.358465221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:25:25.388462 kubelet[3176]: E1124 00:25:25.388450 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.388534 kubelet[3176]: W1124 00:25:25.388527 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.388586 kubelet[3176]: E1124 00:25:25.388561 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.388715 kubelet[3176]: E1124 00:25:25.388687 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.388715 kubelet[3176]: W1124 00:25:25.388693 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.388715 kubelet[3176]: E1124 00:25:25.388698 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.388878 kubelet[3176]: E1124 00:25:25.388849 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.388878 kubelet[3176]: W1124 00:25:25.388853 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.388878 kubelet[3176]: E1124 00:25:25.388859 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.389060 kubelet[3176]: E1124 00:25:25.389026 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.389060 kubelet[3176]: W1124 00:25:25.389031 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.389060 kubelet[3176]: E1124 00:25:25.389035 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.389244 kubelet[3176]: E1124 00:25:25.389214 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.389244 kubelet[3176]: W1124 00:25:25.389220 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.389244 kubelet[3176]: E1124 00:25:25.389226 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.389398 kubelet[3176]: E1124 00:25:25.389373 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.389398 kubelet[3176]: W1124 00:25:25.389377 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.389398 kubelet[3176]: E1124 00:25:25.389382 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.389544 kubelet[3176]: E1124 00:25:25.389519 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:25:25.389544 kubelet[3176]: W1124 00:25:25.389524 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:25:25.389544 kubelet[3176]: E1124 00:25:25.389528 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 00:25:25.400347 kubelet[3176]: I1124 00:25:25.400320 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgg9\" (UniqueName: \"kubernetes.io/projected/405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae-kube-api-access-ldgg9\") pod \"csi-node-driver-zdsr7\" (UID: \"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae\") " pod="calico-system/csi-node-driver-zdsr7"
Nov 24 00:25:25.400561 kubelet[3176]: I1124 00:25:25.400541 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae-kubelet-dir\") pod \"csi-node-driver-zdsr7\" (UID: \"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae\") " pod="calico-system/csi-node-driver-zdsr7"
Nov 24 00:25:25.402554 kubelet[3176]: I1124 00:25:25.402514 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae-registration-dir\") pod \"csi-node-driver-zdsr7\" (UID: \"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae\") " pod="calico-system/csi-node-driver-zdsr7"
Nov 24 00:25:25.405277 kubelet[3176]: I1124 00:25:25.404990 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae-socket-dir\") pod \"csi-node-driver-zdsr7\" (UID: \"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae\") " pod="calico-system/csi-node-driver-zdsr7"
Nov 24 00:25:25.406302 kubelet[3176]: I1124 00:25:25.406232 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae-varrun\") pod \"csi-node-driver-zdsr7\" (UID: \"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae\") " pod="calico-system/csi-node-driver-zdsr7"
Nov 24 00:25:25.421267 containerd[1712]: time="2025-11-24T00:25:25.421244636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5wqr2,Uid:f8ecae07-143b-451e-9ab6-ad378af8d675,Namespace:calico-system,Attempt:0,}"
Nov 24 00:25:25.453825 containerd[1712]: time="2025-11-24T00:25:25.453416085Z" level=info msg="connecting to shim 666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a" address="unix:///run/containerd/s/75dd7b63c0770d8e0f58e9793e18537722cf910b67b285867c78c7c4a88be414" namespace=k8s.io protocol=ttrpc version=3
Nov 24 00:25:25.472274 systemd[1]: Started cri-containerd-666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a.scope - libcontainer container 666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a.
Nov 24 00:25:25.490712 containerd[1712]: time="2025-11-24T00:25:25.490677012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5wqr2,Uid:f8ecae07-143b-451e-9ab6-ad378af8d675,Namespace:calico-system,Attempt:0,} returns sandbox id \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\""
Nov 24 00:25:25.511631 kubelet[3176]: E1124 00:25:25.511447 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:25:25.511953 kubelet[3176]: W1124 00:25:25.511889 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:25:25.511953 kubelet[3176]: E1124 00:25:25.511908 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.512346 kubelet[3176]: E1124 00:25:25.512331 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.514232 kubelet[3176]: W1124 00:25:25.512347 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.515293 kubelet[3176]: E1124 00:25:25.515271 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:25.515731 kubelet[3176]: E1124 00:25:25.515555 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.515731 kubelet[3176]: W1124 00:25:25.515564 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.515731 kubelet[3176]: E1124 00:25:25.515574 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:25.523246 kubelet[3176]: E1124 00:25:25.522652 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:25.523324 kubelet[3176]: W1124 00:25:25.522663 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:25.523363 kubelet[3176]: E1124 00:25:25.523330 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:26.880579 kubelet[3176]: E1124 00:25:26.880531 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:26.966268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222095268.mount: Deactivated successfully. 
Nov 24 00:25:28.043946 containerd[1712]: time="2025-11-24T00:25:28.043875813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:28.046123 containerd[1712]: time="2025-11-24T00:25:28.046095151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:25:28.048247 containerd[1712]: time="2025-11-24T00:25:28.048208145Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:28.051123 containerd[1712]: time="2025-11-24T00:25:28.051089268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:28.051380 containerd[1712]: time="2025-11-24T00:25:28.051360272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.692759441s" Nov 24 00:25:28.051415 containerd[1712]: time="2025-11-24T00:25:28.051387044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:25:28.052285 containerd[1712]: time="2025-11-24T00:25:28.052260684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:25:28.066311 containerd[1712]: time="2025-11-24T00:25:28.066281791Z" level=info msg="CreateContainer within sandbox \"9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:25:28.107169 containerd[1712]: time="2025-11-24T00:25:28.105590369Z" level=info msg="Container 9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:28.123156 containerd[1712]: time="2025-11-24T00:25:28.123129575Z" level=info msg="CreateContainer within sandbox \"9834f1269f223f3ab48181b83458bcf489f3051d99e525998b548dd1c40bccbf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb\"" Nov 24 00:25:28.123603 containerd[1712]: time="2025-11-24T00:25:28.123518838Z" level=info msg="StartContainer for \"9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb\"" Nov 24 00:25:28.124649 containerd[1712]: time="2025-11-24T00:25:28.124623371Z" level=info msg="connecting to shim 9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb" address="unix:///run/containerd/s/656272cf2d6bfa77fa62e2cdeb54d4527006aa08dd6ee8920f5864fac94dc326" protocol=ttrpc version=3 Nov 24 00:25:28.141316 systemd[1]: Started cri-containerd-9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb.scope - libcontainer container 9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb. 
Nov 24 00:25:28.187423 containerd[1712]: time="2025-11-24T00:25:28.187398823Z" level=info msg="StartContainer for \"9577692111329fe9b3bdb6163f33928cd518bfe46d93e46826d4f1945f4657fb\" returns successfully" Nov 24 00:25:28.880333 kubelet[3176]: E1124 00:25:28.880278 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:28.965677 kubelet[3176]: I1124 00:25:28.965631 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-575c5d7d74-68z8t" podStartSLOduration=2.271516956 podStartE2EDuration="4.965619801s" podCreationTimestamp="2025-11-24 00:25:24 +0000 UTC" firstStartedPulling="2025-11-24 00:25:25.357952554 +0000 UTC m=+19.563119596" lastFinishedPulling="2025-11-24 00:25:28.052055405 +0000 UTC m=+22.257222441" observedRunningTime="2025-11-24 00:25:28.965434011 +0000 UTC m=+23.170601056" watchObservedRunningTime="2025-11-24 00:25:28.965619801 +0000 UTC m=+23.170786849" Nov 24 00:25:29.019604 kubelet[3176]: E1124 00:25:29.019563 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.019741 kubelet[3176]: W1124 00:25:29.019612 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.019741 kubelet[3176]: E1124 00:25:29.019629 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.019816 kubelet[3176]: E1124 00:25:29.019768 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.019816 kubelet[3176]: W1124 00:25:29.019775 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.019816 kubelet[3176]: E1124 00:25:29.019782 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.019902 kubelet[3176]: E1124 00:25:29.019886 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.019902 kubelet[3176]: W1124 00:25:29.019891 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.019961 kubelet[3176]: E1124 00:25:29.019897 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.020125 kubelet[3176]: E1124 00:25:29.020102 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020125 kubelet[3176]: W1124 00:25:29.020109 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020125 kubelet[3176]: E1124 00:25:29.020116 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.020252 kubelet[3176]: E1124 00:25:29.020242 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020252 kubelet[3176]: W1124 00:25:29.020250 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020306 kubelet[3176]: E1124 00:25:29.020257 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.020350 kubelet[3176]: E1124 00:25:29.020341 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020350 kubelet[3176]: W1124 00:25:29.020348 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020403 kubelet[3176]: E1124 00:25:29.020353 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.020451 kubelet[3176]: E1124 00:25:29.020439 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020451 kubelet[3176]: W1124 00:25:29.020444 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020538 kubelet[3176]: E1124 00:25:29.020450 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.020538 kubelet[3176]: E1124 00:25:29.020531 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020538 kubelet[3176]: W1124 00:25:29.020536 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020619 kubelet[3176]: E1124 00:25:29.020541 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.020643 kubelet[3176]: E1124 00:25:29.020630 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020643 kubelet[3176]: W1124 00:25:29.020635 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020643 kubelet[3176]: E1124 00:25:29.020640 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.020751 kubelet[3176]: E1124 00:25:29.020720 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020751 kubelet[3176]: W1124 00:25:29.020725 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020751 kubelet[3176]: E1124 00:25:29.020730 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.020840 kubelet[3176]: E1124 00:25:29.020814 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020840 kubelet[3176]: W1124 00:25:29.020818 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020840 kubelet[3176]: E1124 00:25:29.020824 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.020929 kubelet[3176]: E1124 00:25:29.020906 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.020929 kubelet[3176]: W1124 00:25:29.020910 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.020929 kubelet[3176]: E1124 00:25:29.020915 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.021011 kubelet[3176]: E1124 00:25:29.021004 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.021011 kubelet[3176]: W1124 00:25:29.021009 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.021070 kubelet[3176]: E1124 00:25:29.021014 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.021100 kubelet[3176]: E1124 00:25:29.021098 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.021124 kubelet[3176]: W1124 00:25:29.021102 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.021124 kubelet[3176]: E1124 00:25:29.021108 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.021216 kubelet[3176]: E1124 00:25:29.021201 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.021216 kubelet[3176]: W1124 00:25:29.021206 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.021216 kubelet[3176]: E1124 00:25:29.021211 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.028495 kubelet[3176]: E1124 00:25:29.028476 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.028495 kubelet[3176]: W1124 00:25:29.028489 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.028608 kubelet[3176]: E1124 00:25:29.028500 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.028660 kubelet[3176]: E1124 00:25:29.028649 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.028660 kubelet[3176]: W1124 00:25:29.028657 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.028721 kubelet[3176]: E1124 00:25:29.028664 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.028838 kubelet[3176]: E1124 00:25:29.028825 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.028866 kubelet[3176]: W1124 00:25:29.028841 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.028866 kubelet[3176]: E1124 00:25:29.028847 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.029038 kubelet[3176]: E1124 00:25:29.029020 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029038 kubelet[3176]: W1124 00:25:29.029036 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029098 kubelet[3176]: E1124 00:25:29.029042 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.029209 kubelet[3176]: E1124 00:25:29.029198 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029209 kubelet[3176]: W1124 00:25:29.029206 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029266 kubelet[3176]: E1124 00:25:29.029212 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.029335 kubelet[3176]: E1124 00:25:29.029317 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029335 kubelet[3176]: W1124 00:25:29.029333 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029393 kubelet[3176]: E1124 00:25:29.029339 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.029497 kubelet[3176]: E1124 00:25:29.029469 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029552 kubelet[3176]: W1124 00:25:29.029542 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029575 kubelet[3176]: E1124 00:25:29.029551 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.029766 kubelet[3176]: E1124 00:25:29.029705 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029766 kubelet[3176]: W1124 00:25:29.029712 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029766 kubelet[3176]: E1124 00:25:29.029718 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.029857 kubelet[3176]: E1124 00:25:29.029791 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029857 kubelet[3176]: W1124 00:25:29.029796 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029857 kubelet[3176]: E1124 00:25:29.029801 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.029919 kubelet[3176]: E1124 00:25:29.029865 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029919 kubelet[3176]: W1124 00:25:29.029869 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029919 kubelet[3176]: E1124 00:25:29.029874 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.029987 kubelet[3176]: E1124 00:25:29.029955 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.029987 kubelet[3176]: W1124 00:25:29.029960 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.029987 kubelet[3176]: E1124 00:25:29.029965 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.030269 kubelet[3176]: E1124 00:25:29.030174 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.030269 kubelet[3176]: W1124 00:25:29.030186 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.030269 kubelet[3176]: E1124 00:25:29.030196 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.030385 kubelet[3176]: E1124 00:25:29.030306 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.030385 kubelet[3176]: W1124 00:25:29.030311 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.030385 kubelet[3176]: E1124 00:25:29.030317 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.030715 kubelet[3176]: E1124 00:25:29.030690 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.030715 kubelet[3176]: W1124 00:25:29.030712 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.030788 kubelet[3176]: E1124 00:25:29.030720 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.030864 kubelet[3176]: E1124 00:25:29.030837 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.030887 kubelet[3176]: W1124 00:25:29.030863 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.030887 kubelet[3176]: E1124 00:25:29.030870 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.031022 kubelet[3176]: E1124 00:25:29.031010 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.031022 kubelet[3176]: W1124 00:25:29.031019 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.031077 kubelet[3176]: E1124 00:25:29.031027 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.031321 kubelet[3176]: E1124 00:25:29.031291 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.031321 kubelet[3176]: W1124 00:25:29.031317 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.031382 kubelet[3176]: E1124 00:25:29.031325 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:25:29.031473 kubelet[3176]: E1124 00:25:29.031450 3176 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:25:29.031473 kubelet[3176]: W1124 00:25:29.031470 3176 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:25:29.031535 kubelet[3176]: E1124 00:25:29.031477 3176 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:25:29.767736 containerd[1712]: time="2025-11-24T00:25:29.767697094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:29.770825 containerd[1712]: time="2025-11-24T00:25:29.770791739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:25:29.773998 containerd[1712]: time="2025-11-24T00:25:29.773960681Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:29.777104 containerd[1712]: time="2025-11-24T00:25:29.777064823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:29.777644 containerd[1712]: time="2025-11-24T00:25:29.777384752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.725095174s" Nov 24 00:25:29.777644 containerd[1712]: time="2025-11-24T00:25:29.777412396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:25:29.782819 containerd[1712]: time="2025-11-24T00:25:29.782785354Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:25:29.803754 containerd[1712]: time="2025-11-24T00:25:29.803692270Z" level=info msg="Container 40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:29.850084 containerd[1712]: time="2025-11-24T00:25:29.850059165Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9\"" Nov 24 00:25:29.850627 containerd[1712]: time="2025-11-24T00:25:29.850609206Z" level=info msg="StartContainer for \"40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9\"" Nov 24 00:25:29.851822 containerd[1712]: time="2025-11-24T00:25:29.851773191Z" level=info msg="connecting to shim 40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9" address="unix:///run/containerd/s/75dd7b63c0770d8e0f58e9793e18537722cf910b67b285867c78c7c4a88be414" protocol=ttrpc version=3 Nov 24 00:25:29.877302 systemd[1]: Started cri-containerd-40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9.scope - libcontainer container 40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9. Nov 24 00:25:29.942210 containerd[1712]: time="2025-11-24T00:25:29.942188155Z" level=info msg="StartContainer for \"40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9\" returns successfully" Nov 24 00:25:29.946961 systemd[1]: cri-containerd-40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9.scope: Deactivated successfully. 
Nov 24 00:25:29.950722 containerd[1712]: time="2025-11-24T00:25:29.950357159Z" level=info msg="received container exit event container_id:\"40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9\" id:\"40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9\" pid:3873 exited_at:{seconds:1763943929 nanos:948782246}" Nov 24 00:25:29.957903 kubelet[3176]: I1124 00:25:29.957883 3176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:25:29.970825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40a71a6cddecbace7e402bd160bcff59810e3db2e0c5bbc16dd19ba63443b8b9-rootfs.mount: Deactivated successfully. Nov 24 00:25:30.880855 kubelet[3176]: E1124 00:25:30.880813 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:31.141462 kubelet[3176]: I1124 00:25:31.141337 3176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:25:32.880035 kubelet[3176]: E1124 00:25:32.880000 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:32.964610 containerd[1712]: time="2025-11-24T00:25:32.964354173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:25:34.880143 kubelet[3176]: E1124 00:25:34.880101 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:36.880519 kubelet[3176]: E1124 00:25:36.880333 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:37.224424 containerd[1712]: time="2025-11-24T00:25:37.224331226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:37.225934 containerd[1712]: time="2025-11-24T00:25:37.225904939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:25:37.232050 containerd[1712]: time="2025-11-24T00:25:37.232010621Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:37.240565 containerd[1712]: time="2025-11-24T00:25:37.240518934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:37.241057 containerd[1712]: time="2025-11-24T00:25:37.240974592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.276584961s" Nov 24 00:25:37.241057 containerd[1712]: time="2025-11-24T00:25:37.241000916Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:25:37.247036 containerd[1712]: time="2025-11-24T00:25:37.247010024Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:25:37.267088 containerd[1712]: time="2025-11-24T00:25:37.266046680Z" level=info msg="Container ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:37.282899 containerd[1712]: time="2025-11-24T00:25:37.282873521Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef\"" Nov 24 00:25:37.283228 containerd[1712]: time="2025-11-24T00:25:37.283208001Z" level=info msg="StartContainer for \"ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef\"" Nov 24 00:25:37.284196 containerd[1712]: time="2025-11-24T00:25:37.284165929Z" level=info msg="connecting to shim ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef" address="unix:///run/containerd/s/75dd7b63c0770d8e0f58e9793e18537722cf910b67b285867c78c7c4a88be414" protocol=ttrpc version=3 Nov 24 00:25:37.306301 systemd[1]: Started cri-containerd-ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef.scope - libcontainer container ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef. 
Nov 24 00:25:37.372084 containerd[1712]: time="2025-11-24T00:25:37.372020554Z" level=info msg="StartContainer for \"ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef\" returns successfully" Nov 24 00:25:38.450113 containerd[1712]: time="2025-11-24T00:25:38.450064609Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:25:38.452460 systemd[1]: cri-containerd-ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef.scope: Deactivated successfully. Nov 24 00:25:38.452708 systemd[1]: cri-containerd-ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef.scope: Consumed 372ms CPU time, 193.4M memory peak, 171.3M written to disk. Nov 24 00:25:38.453113 containerd[1712]: time="2025-11-24T00:25:38.453087287Z" level=info msg="received container exit event container_id:\"ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef\" id:\"ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef\" pid:3936 exited_at:{seconds:1763943938 nanos:452358194}" Nov 24 00:25:38.473100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef4d1f2b93e125b5a024c10180289b2b083168dc8a188e81a7706255360791ef-rootfs.mount: Deactivated successfully. 
Nov 24 00:25:38.487703 kubelet[3176]: I1124 00:25:38.487675 3176 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:25:38.786216 kubelet[3176]: I1124 00:25:38.786135 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6b0376e-593a-409d-bac3-21945844d4a4-config-volume\") pod \"coredns-674b8bbfcf-dtd9z\" (UID: \"b6b0376e-593a-409d-bac3-21945844d4a4\") " pod="kube-system/coredns-674b8bbfcf-dtd9z" Nov 24 00:25:38.786216 kubelet[3176]: I1124 00:25:38.786192 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxstz\" (UniqueName: \"kubernetes.io/projected/b6b0376e-593a-409d-bac3-21945844d4a4-kube-api-access-zxstz\") pod \"coredns-674b8bbfcf-dtd9z\" (UID: \"b6b0376e-593a-409d-bac3-21945844d4a4\") " pod="kube-system/coredns-674b8bbfcf-dtd9z" Nov 24 00:25:38.853146 systemd[1]: Created slice kubepods-burstable-podb6b0376e_593a_409d_bac3_21945844d4a4.slice - libcontainer container kubepods-burstable-podb6b0376e_593a_409d_bac3_21945844d4a4.slice. 
Nov 24 00:25:38.893619 kubelet[3176]: I1124 00:25:38.887181 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a-config-volume\") pod \"coredns-674b8bbfcf-ks62v\" (UID: \"a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a\") " pod="kube-system/coredns-674b8bbfcf-ks62v" Nov 24 00:25:38.893619 kubelet[3176]: I1124 00:25:38.887210 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmw8k\" (UniqueName: \"kubernetes.io/projected/a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a-kube-api-access-cmw8k\") pod \"coredns-674b8bbfcf-ks62v\" (UID: \"a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a\") " pod="kube-system/coredns-674b8bbfcf-ks62v" Nov 24 00:25:39.017900 systemd[1]: Created slice kubepods-besteffort-pod405d9e27_2783_4e51_8c7a_b9ed2ffdd4ae.slice - libcontainer container kubepods-besteffort-pod405d9e27_2783_4e51_8c7a_b9ed2ffdd4ae.slice. 
Nov 24 00:25:39.019814 containerd[1712]: time="2025-11-24T00:25:39.019756621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdsr7,Uid:405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:39.088792 kubelet[3176]: I1124 00:25:39.088533 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsv4\" (UniqueName: \"kubernetes.io/projected/75ee8f86-4798-48a9-84fa-9fab492c51e9-kube-api-access-jnsv4\") pod \"calico-apiserver-6d8bbff79b-r9q7n\" (UID: \"75ee8f86-4798-48a9-84fa-9fab492c51e9\") " pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" Nov 24 00:25:39.088792 kubelet[3176]: I1124 00:25:39.088574 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75ee8f86-4798-48a9-84fa-9fab492c51e9-calico-apiserver-certs\") pod \"calico-apiserver-6d8bbff79b-r9q7n\" (UID: \"75ee8f86-4798-48a9-84fa-9fab492c51e9\") " pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" Nov 24 00:25:39.188957 kubelet[3176]: E1124 00:25:39.188816 3176 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: object "calico-apiserver"/"calico-apiserver-certs" not registered Nov 24 00:25:39.188957 kubelet[3176]: E1124 00:25:39.188877 3176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75ee8f86-4798-48a9-84fa-9fab492c51e9-calico-apiserver-certs podName:75ee8f86-4798-48a9-84fa-9fab492c51e9 nodeName:}" failed. No retries permitted until 2025-11-24 00:25:39.688861484 +0000 UTC m=+33.894028536 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/75ee8f86-4798-48a9-84fa-9fab492c51e9-calico-apiserver-certs") pod "calico-apiserver-6d8bbff79b-r9q7n" (UID: "75ee8f86-4798-48a9-84fa-9fab492c51e9") : object "calico-apiserver"/"calico-apiserver-certs" not registered Nov 24 00:25:39.288147 kubelet[3176]: E1124 00:25:39.201614 3176 projected.go:289] Couldn't get configMap calico-apiserver/kube-root-ca.crt: object "calico-apiserver"/"kube-root-ca.crt" not registered Nov 24 00:25:39.288147 kubelet[3176]: E1124 00:25:39.201631 3176 projected.go:194] Error preparing data for projected volume kube-api-access-jnsv4 for pod calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n: object "calico-apiserver"/"kube-root-ca.crt" not registered Nov 24 00:25:39.288147 kubelet[3176]: E1124 00:25:39.201665 3176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ee8f86-4798-48a9-84fa-9fab492c51e9-kube-api-access-jnsv4 podName:75ee8f86-4798-48a9-84fa-9fab492c51e9 nodeName:}" failed. No retries permitted until 2025-11-24 00:25:39.701652899 +0000 UTC m=+33.906819944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jnsv4" (UniqueName: "kubernetes.io/projected/75ee8f86-4798-48a9-84fa-9fab492c51e9-kube-api-access-jnsv4") pod "calico-apiserver-6d8bbff79b-r9q7n" (UID: "75ee8f86-4798-48a9-84fa-9fab492c51e9") : object "calico-apiserver"/"kube-root-ca.crt" not registered Nov 24 00:25:39.288645 containerd[1712]: time="2025-11-24T00:25:39.288613411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dtd9z,Uid:b6b0376e-593a-409d-bac3-21945844d4a4,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:39.297087 systemd[1]: Created slice kubepods-besteffort-pod54b044c1_6007_4258_8bbb_52327ebce247.slice - libcontainer container kubepods-besteffort-pod54b044c1_6007_4258_8bbb_52327ebce247.slice. 
Nov 24 00:25:39.302503 systemd[1]: Created slice kubepods-besteffort-pod75ee8f86_4798_48a9_84fa_9fab492c51e9.slice - libcontainer container kubepods-besteffort-pod75ee8f86_4798_48a9_84fa_9fab492c51e9.slice. Nov 24 00:25:39.369089 systemd[1]: Created slice kubepods-burstable-poda7cdb8e2_b9cd_48e8_913a_1a9a9f053c7a.slice - libcontainer container kubepods-burstable-poda7cdb8e2_b9cd_48e8_913a_1a9a9f053c7a.slice. Nov 24 00:25:39.379457 containerd[1712]: time="2025-11-24T00:25:39.379141630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ks62v,Uid:a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:39.380954 systemd[1]: Created slice kubepods-besteffort-pod6fa98e89_de7e_4aff_a7d8_ed455ce756f9.slice - libcontainer container kubepods-besteffort-pod6fa98e89_de7e_4aff_a7d8_ed455ce756f9.slice. Nov 24 00:25:39.390641 kubelet[3176]: I1124 00:25:39.390513 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b044c1-6007-4258-8bbb-52327ebce247-whisker-backend-key-pair\") pod \"whisker-58d8b4664-4t8wz\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " pod="calico-system/whisker-58d8b4664-4t8wz" Nov 24 00:25:39.390641 kubelet[3176]: I1124 00:25:39.390542 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6fa98e89-de7e-4aff-a7d8-ed455ce756f9-calico-apiserver-certs\") pod \"calico-apiserver-6d8bbff79b-qmsbd\" (UID: \"6fa98e89-de7e-4aff-a7d8-ed455ce756f9\") " pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" Nov 24 00:25:39.390641 kubelet[3176]: I1124 00:25:39.390560 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6-tigera-ca-bundle\") pod 
\"calico-kube-controllers-757ffb85c9-k5zc7\" (UID: \"2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6\") " pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" Nov 24 00:25:39.390641 kubelet[3176]: I1124 00:25:39.390576 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bcd41b-67c8-425f-834c-8c6ed20d39b0-config\") pod \"goldmane-666569f655-77x2q\" (UID: \"f3bcd41b-67c8-425f-834c-8c6ed20d39b0\") " pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.390641 kubelet[3176]: I1124 00:25:39.390606 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f3bcd41b-67c8-425f-834c-8c6ed20d39b0-goldmane-key-pair\") pod \"goldmane-666569f655-77x2q\" (UID: \"f3bcd41b-67c8-425f-834c-8c6ed20d39b0\") " pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.391436 kubelet[3176]: I1124 00:25:39.391098 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b044c1-6007-4258-8bbb-52327ebce247-whisker-ca-bundle\") pod \"whisker-58d8b4664-4t8wz\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " pod="calico-system/whisker-58d8b4664-4t8wz" Nov 24 00:25:39.391436 kubelet[3176]: I1124 00:25:39.391123 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfn4k\" (UniqueName: \"kubernetes.io/projected/2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6-kube-api-access-zfn4k\") pod \"calico-kube-controllers-757ffb85c9-k5zc7\" (UID: \"2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6\") " pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" Nov 24 00:25:39.391436 kubelet[3176]: I1124 00:25:39.391139 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f3bcd41b-67c8-425f-834c-8c6ed20d39b0-goldmane-ca-bundle\") pod \"goldmane-666569f655-77x2q\" (UID: \"f3bcd41b-67c8-425f-834c-8c6ed20d39b0\") " pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.391436 kubelet[3176]: I1124 00:25:39.391189 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gdm\" (UniqueName: \"kubernetes.io/projected/54b044c1-6007-4258-8bbb-52327ebce247-kube-api-access-p2gdm\") pod \"whisker-58d8b4664-4t8wz\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " pod="calico-system/whisker-58d8b4664-4t8wz" Nov 24 00:25:39.391436 kubelet[3176]: I1124 00:25:39.391211 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52vr\" (UniqueName: \"kubernetes.io/projected/6fa98e89-de7e-4aff-a7d8-ed455ce756f9-kube-api-access-f52vr\") pod \"calico-apiserver-6d8bbff79b-qmsbd\" (UID: \"6fa98e89-de7e-4aff-a7d8-ed455ce756f9\") " pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" Nov 24 00:25:39.392323 kubelet[3176]: I1124 00:25:39.391856 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7tr\" (UniqueName: \"kubernetes.io/projected/f3bcd41b-67c8-425f-834c-8c6ed20d39b0-kube-api-access-vr7tr\") pod \"goldmane-666569f655-77x2q\" (UID: \"f3bcd41b-67c8-425f-834c-8c6ed20d39b0\") " pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.400490 systemd[1]: Created slice kubepods-besteffort-podf3bcd41b_67c8_425f_834c_8c6ed20d39b0.slice - libcontainer container kubepods-besteffort-podf3bcd41b_67c8_425f_834c_8c6ed20d39b0.slice. Nov 24 00:25:39.408192 systemd[1]: Created slice kubepods-besteffort-pod2e86d1f8_53a9_4c14_a8cb_ffab2ee1ecb6.slice - libcontainer container kubepods-besteffort-pod2e86d1f8_53a9_4c14_a8cb_ffab2ee1ecb6.slice. 
Nov 24 00:25:39.469250 containerd[1712]: time="2025-11-24T00:25:39.469202575Z" level=error msg="Failed to destroy network for sandbox \"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.472944 containerd[1712]: time="2025-11-24T00:25:39.472901225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdsr7,Uid:405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.473232 kubelet[3176]: E1124 00:25:39.473075 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.473232 kubelet[3176]: E1124 00:25:39.473129 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdsr7" Nov 24 00:25:39.473537 kubelet[3176]: E1124 00:25:39.473469 3176 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdsr7" Nov 24 00:25:39.473615 kubelet[3176]: E1124 00:25:39.473598 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f7199bf9b2f257302e2dd64e905ae8e67281432bd678eaaf2a4e48bf19b3ecf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:39.474290 systemd[1]: run-netns-cni\x2dc19dae5e\x2d1cac\x2de5c9\x2d9115\x2d36c61c24e48e.mount: Deactivated successfully. Nov 24 00:25:39.484414 containerd[1712]: time="2025-11-24T00:25:39.482807167Z" level=error msg="Failed to destroy network for sandbox \"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.485387 systemd[1]: run-netns-cni\x2d756f6037\x2d5c7d\x2df3c5\x2d732a\x2dc0bb12777945.mount: Deactivated successfully. 
Nov 24 00:25:39.487165 containerd[1712]: time="2025-11-24T00:25:39.487106946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dtd9z,Uid:b6b0376e-593a-409d-bac3-21945844d4a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.487822 kubelet[3176]: E1124 00:25:39.487559 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.487822 kubelet[3176]: E1124 00:25:39.487614 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dtd9z" Nov 24 00:25:39.487822 kubelet[3176]: E1124 00:25:39.487634 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-dtd9z" Nov 24 00:25:39.488623 kubelet[3176]: E1124 00:25:39.487692 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dtd9z_kube-system(b6b0376e-593a-409d-bac3-21945844d4a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dtd9z_kube-system(b6b0376e-593a-409d-bac3-21945844d4a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa8fb56ba0bb504841579650dfc0a9679545894139e0951220afbf220be4b385\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dtd9z" podUID="b6b0376e-593a-409d-bac3-21945844d4a4" Nov 24 00:25:39.490919 containerd[1712]: time="2025-11-24T00:25:39.490836528Z" level=error msg="Failed to destroy network for sandbox \"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.492776 systemd[1]: run-netns-cni\x2dbd15657d\x2dfc07\x2d6ab0\x2df69c\x2d5ca6ce64bded.mount: Deactivated successfully. 
Nov 24 00:25:39.494851 containerd[1712]: time="2025-11-24T00:25:39.494783601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ks62v,Uid:a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.497168 kubelet[3176]: E1124 00:25:39.496735 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.497168 kubelet[3176]: E1124 00:25:39.496769 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ks62v" Nov 24 00:25:39.497168 kubelet[3176]: E1124 00:25:39.496787 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-ks62v" Nov 24 00:25:39.497291 kubelet[3176]: E1124 00:25:39.496918 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ks62v_kube-system(a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ks62v_kube-system(a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b8d19222699c52c79f76c2be3d338802fefa32db8a345dd57fdea33acfeac6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ks62v" podUID="a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a" Nov 24 00:25:39.600499 containerd[1712]: time="2025-11-24T00:25:39.600471902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d8b4664-4t8wz,Uid:54b044c1-6007-4258-8bbb-52327ebce247,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:39.638083 containerd[1712]: time="2025-11-24T00:25:39.638019474Z" level=error msg="Failed to destroy network for sandbox \"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.641342 containerd[1712]: time="2025-11-24T00:25:39.641315495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d8b4664-4t8wz,Uid:54b044c1-6007-4258-8bbb-52327ebce247,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.641750 kubelet[3176]: E1124 00:25:39.641449 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.641750 kubelet[3176]: E1124 00:25:39.641477 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d8b4664-4t8wz" Nov 24 00:25:39.641750 kubelet[3176]: E1124 00:25:39.641492 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d8b4664-4t8wz" Nov 24 00:25:39.641848 kubelet[3176]: E1124 00:25:39.641543 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58d8b4664-4t8wz_calico-system(54b044c1-6007-4258-8bbb-52327ebce247)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58d8b4664-4t8wz_calico-system(54b044c1-6007-4258-8bbb-52327ebce247)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6b94a91525021442cff0915b12f7836f86b3f7cd4a114b7933a472f571bab69c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58d8b4664-4t8wz" podUID="54b044c1-6007-4258-8bbb-52327ebce247" Nov 24 00:25:39.689426 containerd[1712]: time="2025-11-24T00:25:39.689405283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-qmsbd,Uid:6fa98e89-de7e-4aff-a7d8-ed455ce756f9,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:25:39.707737 containerd[1712]: time="2025-11-24T00:25:39.707704844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-77x2q,Uid:f3bcd41b-67c8-425f-834c-8c6ed20d39b0,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:39.712850 containerd[1712]: time="2025-11-24T00:25:39.712760922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757ffb85c9-k5zc7,Uid:2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:39.730897 containerd[1712]: time="2025-11-24T00:25:39.730865680Z" level=error msg="Failed to destroy network for sandbox \"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.761551 containerd[1712]: time="2025-11-24T00:25:39.761467761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-qmsbd,Uid:6fa98e89-de7e-4aff-a7d8-ed455ce756f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.761764 kubelet[3176]: E1124 00:25:39.761733 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.761835 kubelet[3176]: E1124 00:25:39.761773 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" Nov 24 00:25:39.761835 kubelet[3176]: E1124 00:25:39.761793 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" Nov 24 00:25:39.761936 kubelet[3176]: E1124 00:25:39.761897 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"a7e8fa115b0319b0000937524f8a008a6719c62337f46338c2d3c806dc48b57e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:25:39.789688 containerd[1712]: time="2025-11-24T00:25:39.789651151Z" level=error msg="Failed to destroy network for sandbox \"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.803432 containerd[1712]: time="2025-11-24T00:25:39.803405474Z" level=error msg="Failed to destroy network for sandbox \"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.807763 containerd[1712]: time="2025-11-24T00:25:39.807709873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-77x2q,Uid:f3bcd41b-67c8-425f-834c-8c6ed20d39b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.807896 kubelet[3176]: E1124 00:25:39.807834 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.807896 kubelet[3176]: E1124 00:25:39.807874 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.807896 kubelet[3176]: E1124 00:25:39.807892 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-77x2q" Nov 24 00:25:39.807971 kubelet[3176]: E1124 00:25:39.807929 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60fff0f0431aacd15faf24e6ced50f1f65f82c341455a6622cbed194824e4fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-77x2q" 
podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:25:39.826195 containerd[1712]: time="2025-11-24T00:25:39.826167301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757ffb85c9-k5zc7,Uid:2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.826312 kubelet[3176]: E1124 00:25:39.826290 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.826350 kubelet[3176]: E1124 00:25:39.826334 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" Nov 24 00:25:39.826384 kubelet[3176]: E1124 00:25:39.826351 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" Nov 24 00:25:39.826424 kubelet[3176]: E1124 00:25:39.826403 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9f3027447097600d45066abd67f30baabdc0f225ea991a3df770af38d27fa1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:25:39.905296 containerd[1712]: time="2025-11-24T00:25:39.905168625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-r9q7n,Uid:75ee8f86-4798-48a9-84fa-9fab492c51e9,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:25:39.945259 containerd[1712]: time="2025-11-24T00:25:39.945225937Z" level=error msg="Failed to destroy network for sandbox \"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.947937 containerd[1712]: time="2025-11-24T00:25:39.947906311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-r9q7n,Uid:75ee8f86-4798-48a9-84fa-9fab492c51e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.948113 kubelet[3176]: E1124 00:25:39.948049 3176 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:25:39.948113 kubelet[3176]: E1124 00:25:39.948089 3176 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" Nov 24 00:25:39.948113 kubelet[3176]: E1124 00:25:39.948105 3176 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" Nov 24 00:25:39.948234 kubelet[3176]: E1124 00:25:39.948212 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d649cadfc2e869454edd29f0aee6ff0f3045a7b12e17d8b40df337e5ba26ca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:25:39.980480 containerd[1712]: time="2025-11-24T00:25:39.980290462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:25:47.347263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996814623.mount: Deactivated successfully. Nov 24 00:25:47.382061 containerd[1712]: time="2025-11-24T00:25:47.382015613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:47.384125 containerd[1712]: time="2025-11-24T00:25:47.383998855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:25:47.386269 containerd[1712]: time="2025-11-24T00:25:47.386246684Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:47.389659 containerd[1712]: time="2025-11-24T00:25:47.389634312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:25:47.390083 containerd[1712]: time="2025-11-24T00:25:47.389859148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.409542455s" Nov 24 00:25:47.390083 containerd[1712]: time="2025-11-24T00:25:47.389885818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:25:47.406223 containerd[1712]: time="2025-11-24T00:25:47.406158140Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:25:47.426763 containerd[1712]: time="2025-11-24T00:25:47.426732108Z" level=info msg="Container 6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:47.445371 containerd[1712]: time="2025-11-24T00:25:47.445346263Z" level=info msg="CreateContainer within sandbox \"666c3245b2742d49a183fdc1c87a28ba6eabde003c49e03d3de2eb8c2432093a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463\"" Nov 24 00:25:47.445971 containerd[1712]: time="2025-11-24T00:25:47.445951760Z" level=info msg="StartContainer for \"6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463\"" Nov 24 00:25:47.447137 containerd[1712]: time="2025-11-24T00:25:47.447114174Z" level=info msg="connecting to shim 6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463" address="unix:///run/containerd/s/75dd7b63c0770d8e0f58e9793e18537722cf910b67b285867c78c7c4a88be414" protocol=ttrpc version=3 Nov 24 00:25:47.462325 systemd[1]: Started cri-containerd-6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463.scope - libcontainer container 
6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463. Nov 24 00:25:47.531332 containerd[1712]: time="2025-11-24T00:25:47.531299978Z" level=info msg="StartContainer for \"6515ff7cf74517e5e42d291262284ba9057e8920bd9f50c34ee5071a96984463\" returns successfully" Nov 24 00:25:47.784663 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:25:47.784748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 00:25:47.935734 kubelet[3176]: I1124 00:25:47.935706 3176 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b044c1-6007-4258-8bbb-52327ebce247-whisker-ca-bundle\") pod \"54b044c1-6007-4258-8bbb-52327ebce247\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " Nov 24 00:25:47.936099 kubelet[3176]: I1124 00:25:47.935740 3176 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2gdm\" (UniqueName: \"kubernetes.io/projected/54b044c1-6007-4258-8bbb-52327ebce247-kube-api-access-p2gdm\") pod \"54b044c1-6007-4258-8bbb-52327ebce247\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " Nov 24 00:25:47.936099 kubelet[3176]: I1124 00:25:47.935761 3176 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b044c1-6007-4258-8bbb-52327ebce247-whisker-backend-key-pair\") pod \"54b044c1-6007-4258-8bbb-52327ebce247\" (UID: \"54b044c1-6007-4258-8bbb-52327ebce247\") " Nov 24 00:25:47.937462 kubelet[3176]: I1124 00:25:47.937429 3176 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b044c1-6007-4258-8bbb-52327ebce247-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54b044c1-6007-4258-8bbb-52327ebce247" (UID: "54b044c1-6007-4258-8bbb-52327ebce247"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:25:47.941869 kubelet[3176]: I1124 00:25:47.941838 3176 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b044c1-6007-4258-8bbb-52327ebce247-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54b044c1-6007-4258-8bbb-52327ebce247" (UID: "54b044c1-6007-4258-8bbb-52327ebce247"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:25:47.942378 kubelet[3176]: I1124 00:25:47.942354 3176 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b044c1-6007-4258-8bbb-52327ebce247-kube-api-access-p2gdm" (OuterVolumeSpecName: "kube-api-access-p2gdm") pod "54b044c1-6007-4258-8bbb-52327ebce247" (UID: "54b044c1-6007-4258-8bbb-52327ebce247"). InnerVolumeSpecName "kube-api-access-p2gdm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:25:48.005202 systemd[1]: Removed slice kubepods-besteffort-pod54b044c1_6007_4258_8bbb_52327ebce247.slice - libcontainer container kubepods-besteffort-pod54b044c1_6007_4258_8bbb_52327ebce247.slice. 
Nov 24 00:25:48.024577 kubelet[3176]: I1124 00:25:48.024531 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5wqr2" podStartSLOduration=1.126309467 podStartE2EDuration="23.024517772s" podCreationTimestamp="2025-11-24 00:25:25 +0000 UTC" firstStartedPulling="2025-11-24 00:25:25.492192344 +0000 UTC m=+19.697359392" lastFinishedPulling="2025-11-24 00:25:47.390400646 +0000 UTC m=+41.595567697" observedRunningTime="2025-11-24 00:25:48.022581478 +0000 UTC m=+42.227748528" watchObservedRunningTime="2025-11-24 00:25:48.024517772 +0000 UTC m=+42.229684821" Nov 24 00:25:48.036289 kubelet[3176]: I1124 00:25:48.036215 3176 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b044c1-6007-4258-8bbb-52327ebce247-whisker-ca-bundle\") on node \"ci-4459.1.2-a-d148bafb83\" DevicePath \"\"" Nov 24 00:25:48.036289 kubelet[3176]: I1124 00:25:48.036237 3176 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2gdm\" (UniqueName: \"kubernetes.io/projected/54b044c1-6007-4258-8bbb-52327ebce247-kube-api-access-p2gdm\") on node \"ci-4459.1.2-a-d148bafb83\" DevicePath \"\"" Nov 24 00:25:48.036289 kubelet[3176]: I1124 00:25:48.036246 3176 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b044c1-6007-4258-8bbb-52327ebce247-whisker-backend-key-pair\") on node \"ci-4459.1.2-a-d148bafb83\" DevicePath \"\"" Nov 24 00:25:48.118521 systemd[1]: Created slice kubepods-besteffort-podb71df8d3_9ea3_44ea_a925_922c7dfc69b9.slice - libcontainer container kubepods-besteffort-podb71df8d3_9ea3_44ea_a925_922c7dfc69b9.slice. 
Nov 24 00:25:48.136447 kubelet[3176]: I1124 00:25:48.136420 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b71df8d3-9ea3-44ea-a925-922c7dfc69b9-whisker-ca-bundle\") pod \"whisker-5ccc86d94b-4njwq\" (UID: \"b71df8d3-9ea3-44ea-a925-922c7dfc69b9\") " pod="calico-system/whisker-5ccc86d94b-4njwq" Nov 24 00:25:48.136525 kubelet[3176]: I1124 00:25:48.136466 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b71df8d3-9ea3-44ea-a925-922c7dfc69b9-whisker-backend-key-pair\") pod \"whisker-5ccc86d94b-4njwq\" (UID: \"b71df8d3-9ea3-44ea-a925-922c7dfc69b9\") " pod="calico-system/whisker-5ccc86d94b-4njwq" Nov 24 00:25:48.136525 kubelet[3176]: I1124 00:25:48.136485 3176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922mz\" (UniqueName: \"kubernetes.io/projected/b71df8d3-9ea3-44ea-a925-922c7dfc69b9-kube-api-access-922mz\") pod \"whisker-5ccc86d94b-4njwq\" (UID: \"b71df8d3-9ea3-44ea-a925-922c7dfc69b9\") " pod="calico-system/whisker-5ccc86d94b-4njwq" Nov 24 00:25:48.347439 systemd[1]: var-lib-kubelet-pods-54b044c1\x2d6007\x2d4258\x2d8bbb\x2d52327ebce247-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2gdm.mount: Deactivated successfully. Nov 24 00:25:48.347529 systemd[1]: var-lib-kubelet-pods-54b044c1\x2d6007\x2d4258\x2d8bbb\x2d52327ebce247-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 24 00:25:48.422355 containerd[1712]: time="2025-11-24T00:25:48.422311273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ccc86d94b-4njwq,Uid:b71df8d3-9ea3-44ea-a925-922c7dfc69b9,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:48.526185 systemd-networkd[1331]: cali43f59bf0128: Link UP Nov 24 00:25:48.526967 systemd-networkd[1331]: cali43f59bf0128: Gained carrier Nov 24 00:25:48.540758 containerd[1712]: 2025-11-24 00:25:48.445 [INFO][4289] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:25:48.540758 containerd[1712]: 2025-11-24 00:25:48.453 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0 whisker-5ccc86d94b- calico-system b71df8d3-9ea3-44ea-a925-922c7dfc69b9 898 0 2025-11-24 00:25:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5ccc86d94b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 whisker-5ccc86d94b-4njwq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali43f59bf0128 [] [] }} ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-" Nov 24 00:25:48.540758 containerd[1712]: 2025-11-24 00:25:48.453 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.540758 containerd[1712]: 2025-11-24 00:25:48.473 [INFO][4299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" HandleID="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Workload="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.473 [INFO][4299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" HandleID="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Workload="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"whisker-5ccc86d94b-4njwq", "timestamp":"2025-11-24 00:25:48.473312679 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.473 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.473 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.473 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.479 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.482 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.484 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.486 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.540958 containerd[1712]: 2025-11-24 00:25:48.487 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.487 [INFO][4299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.488 [INFO][4299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59 Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.493 [INFO][4299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.500 [INFO][4299] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.193/26] block=192.168.42.192/26 handle="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.500 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.193/26] handle="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.500 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:48.541201 containerd[1712]: 2025-11-24 00:25:48.500 [INFO][4299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.193/26] IPv6=[] ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" HandleID="k8s-pod-network.10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Workload="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.541347 containerd[1712]: 2025-11-24 00:25:48.502 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0", GenerateName:"whisker-5ccc86d94b-", Namespace:"calico-system", SelfLink:"", UID:"b71df8d3-9ea3-44ea-a925-922c7dfc69b9", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ccc86d94b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"whisker-5ccc86d94b-4njwq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali43f59bf0128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:48.541347 containerd[1712]: 2025-11-24 00:25:48.502 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.193/32] ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.541447 containerd[1712]: 2025-11-24 00:25:48.502 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43f59bf0128 ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.541447 containerd[1712]: 2025-11-24 00:25:48.526 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.541496 containerd[1712]: 2025-11-24 00:25:48.526 [INFO][4289] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0", GenerateName:"whisker-5ccc86d94b-", Namespace:"calico-system", SelfLink:"", UID:"b71df8d3-9ea3-44ea-a925-922c7dfc69b9", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ccc86d94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59", Pod:"whisker-5ccc86d94b-4njwq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali43f59bf0128", MAC:"f2:ee:7e:ed:e7:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:48.541559 containerd[1712]: 2025-11-24 00:25:48.539 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" Namespace="calico-system" Pod="whisker-5ccc86d94b-4njwq" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-whisker--5ccc86d94b--4njwq-eth0" Nov 24 00:25:48.570480 containerd[1712]: time="2025-11-24T00:25:48.569746654Z" level=info msg="connecting to shim 10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59" address="unix:///run/containerd/s/3b9dde435ade56500e99c7d2b5f8afb96ca5749a3e0a16df17c01ffa88ecbae9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:48.593283 systemd[1]: Started cri-containerd-10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59.scope - libcontainer container 10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59. Nov 24 00:25:48.629216 containerd[1712]: time="2025-11-24T00:25:48.629176314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ccc86d94b-4njwq,Uid:b71df8d3-9ea3-44ea-a925-922c7dfc69b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"10acff9080eebdaca366ae22a350e1f89a3a95b5c621e4dccc4ae4ba237a4e59\"" Nov 24 00:25:48.630358 containerd[1712]: time="2025-11-24T00:25:48.630337298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:25:49.000000 containerd[1712]: time="2025-11-24T00:25:48.999894782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:49.004816 containerd[1712]: time="2025-11-24T00:25:49.004769842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:25:49.004951 kubelet[3176]: E1124 00:25:49.004920 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:25:49.005566 kubelet[3176]: E1124 00:25:49.004963 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:25:49.005798 containerd[1712]: time="2025-11-24T00:25:49.005208923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:25:49.005885 kubelet[3176]: E1124 00:25:49.005103 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ab0198d2f3f240e9aba684dac3248824,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:49.007349 containerd[1712]: time="2025-11-24T00:25:49.007306344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:25:49.352046 containerd[1712]: time="2025-11-24T00:25:49.351848717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:49.354546 containerd[1712]: time="2025-11-24T00:25:49.354425575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:25:49.354546 containerd[1712]: time="2025-11-24T00:25:49.354472127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:25:49.355376 kubelet[3176]: E1124 00:25:49.354773 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:25:49.355376 kubelet[3176]: E1124 00:25:49.354827 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:25:49.355376 kubelet[3176]: E1124 00:25:49.354952 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privile
ged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:49.356370 kubelet[3176]: E1124 00:25:49.356307 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:25:49.748275 systemd-networkd[1331]: vxlan.calico: Link UP Nov 24 00:25:49.748283 systemd-networkd[1331]: vxlan.calico: Gained carrier Nov 24 00:25:49.882894 kubelet[3176]: I1124 00:25:49.882836 3176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="54b044c1-6007-4258-8bbb-52327ebce247" path="/var/lib/kubelet/pods/54b044c1-6007-4258-8bbb-52327ebce247/volumes" Nov 24 00:25:50.002076 kubelet[3176]: E1124 00:25:50.001578 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:25:50.547374 systemd-networkd[1331]: cali43f59bf0128: Gained IPv6LL Nov 24 00:25:50.931354 systemd-networkd[1331]: vxlan.calico: Gained IPv6LL Nov 24 00:25:51.881324 containerd[1712]: time="2025-11-24T00:25:51.881256980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-qmsbd,Uid:6fa98e89-de7e-4aff-a7d8-ed455ce756f9,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:25:51.975075 systemd-networkd[1331]: cali2d8683d9b4a: Link UP Nov 24 00:25:51.975934 systemd-networkd[1331]: cali2d8683d9b4a: Gained carrier Nov 24 00:25:52.005231 containerd[1712]: 2025-11-24 00:25:51.919 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0 
calico-apiserver-6d8bbff79b- calico-apiserver 6fa98e89-de7e-4aff-a7d8-ed455ce756f9 831 0 2025-11-24 00:25:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8bbff79b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 calico-apiserver-6d8bbff79b-qmsbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d8683d9b4a [] [] }} ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-" Nov 24 00:25:52.005231 containerd[1712]: 2025-11-24 00:25:51.919 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.005231 containerd[1712]: 2025-11-24 00:25:51.941 [INFO][4593] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" HandleID="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.941 [INFO][4593] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" HandleID="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.2-a-d148bafb83", "pod":"calico-apiserver-6d8bbff79b-qmsbd", "timestamp":"2025-11-24 00:25:51.941746139 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.941 [INFO][4593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.941 [INFO][4593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.942 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.946 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.949 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.953 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.954 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.006038 containerd[1712]: 2025-11-24 00:25:51.956 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.956 [INFO][4593] ipam/ipam.go 1219: Attempting to assign 1 addresses 
from block block=192.168.42.192/26 handle="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.957 [INFO][4593] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.960 [INFO][4593] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.971 [INFO][4593] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.194/26] block=192.168.42.192/26 handle="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.971 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.194/26] handle="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.971 [INFO][4593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:25:52.007098 containerd[1712]: 2025-11-24 00:25:51.971 [INFO][4593] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.194/26] IPv6=[] ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" HandleID="k8s-pod-network.80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.007273 containerd[1712]: 2025-11-24 00:25:51.972 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0", GenerateName:"calico-apiserver-6d8bbff79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6fa98e89-de7e-4aff-a7d8-ed455ce756f9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8bbff79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"calico-apiserver-6d8bbff79b-qmsbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8683d9b4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:52.007336 containerd[1712]: 2025-11-24 00:25:51.972 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.194/32] ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.007336 containerd[1712]: 2025-11-24 00:25:51.972 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d8683d9b4a ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.007336 containerd[1712]: 2025-11-24 00:25:51.976 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.007397 containerd[1712]: 2025-11-24 00:25:51.977 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0", GenerateName:"calico-apiserver-6d8bbff79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6fa98e89-de7e-4aff-a7d8-ed455ce756f9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8bbff79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc", Pod:"calico-apiserver-6d8bbff79b-qmsbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8683d9b4a", MAC:"5a:54:c0:d4:ea:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:52.007452 containerd[1712]: 2025-11-24 00:25:52.003 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-qmsbd" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--qmsbd-eth0" Nov 24 00:25:52.048593 containerd[1712]: time="2025-11-24T00:25:52.048562411Z" level=info 
msg="connecting to shim 80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc" address="unix:///run/containerd/s/83a34db48f5b1a66d4a8f0f8b6861081d3a7463ce03a8e801067ab37945d617c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:52.070288 systemd[1]: Started cri-containerd-80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc.scope - libcontainer container 80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc. Nov 24 00:25:52.106350 containerd[1712]: time="2025-11-24T00:25:52.106320768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-qmsbd,Uid:6fa98e89-de7e-4aff-a7d8-ed455ce756f9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80a49eb36d2af6713e64bb875dc0ff39a9e6fdab09e880f75cd172818218fecc\"" Nov 24 00:25:52.107480 containerd[1712]: time="2025-11-24T00:25:52.107459649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:25:52.471829 containerd[1712]: time="2025-11-24T00:25:52.471526522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:52.475041 containerd[1712]: time="2025-11-24T00:25:52.474947943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:25:52.475131 containerd[1712]: time="2025-11-24T00:25:52.475028727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:25:52.475291 kubelet[3176]: E1124 00:25:52.475254 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:52.476227 kubelet[3176]: E1124 00:25:52.475390 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:52.476473 kubelet[3176]: E1124 00:25:52.476365 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f52vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:52.477991 kubelet[3176]: E1124 00:25:52.477745 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:25:52.880794 containerd[1712]: time="2025-11-24T00:25:52.880764096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-757ffb85c9-k5zc7,Uid:2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:52.974107 systemd-networkd[1331]: calic12355834b7: Link UP Nov 24 00:25:52.974308 systemd-networkd[1331]: calic12355834b7: Gained carrier Nov 24 00:25:52.989167 containerd[1712]: 2025-11-24 00:25:52.914 [INFO][4656] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0 calico-kube-controllers-757ffb85c9- calico-system 2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6 833 0 2025-11-24 00:25:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:757ffb85c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 calico-kube-controllers-757ffb85c9-k5zc7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic12355834b7 [] [] }} ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-" Nov 24 00:25:52.989167 containerd[1712]: 2025-11-24 00:25:52.915 [INFO][4656] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.989167 containerd[1712]: 2025-11-24 00:25:52.937 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" HandleID="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.938 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" HandleID="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"calico-kube-controllers-757ffb85c9-k5zc7", "timestamp":"2025-11-24 00:25:52.937929576 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.938 [INFO][4668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.938 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.938 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.942 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.944 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.947 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.948 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989513 containerd[1712]: 2025-11-24 00:25:52.950 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.950 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.951 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.956 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.966 [INFO][4668] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.195/26] block=192.168.42.192/26 handle="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.966 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.195/26] handle="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.967 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:52.989806 containerd[1712]: 2025-11-24 00:25:52.967 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.195/26] IPv6=[] ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" HandleID="k8s-pod-network.81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.989974 containerd[1712]: 2025-11-24 00:25:52.968 [INFO][4656] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0", GenerateName:"calico-kube-controllers-757ffb85c9-", Namespace:"calico-system", SelfLink:"", UID:"2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757ffb85c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"calico-kube-controllers-757ffb85c9-k5zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic12355834b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:52.990123 containerd[1712]: 2025-11-24 00:25:52.968 [INFO][4656] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.195/32] ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.990123 containerd[1712]: 2025-11-24 00:25:52.968 [INFO][4656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic12355834b7 ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.990123 containerd[1712]: 2025-11-24 00:25:52.974 [INFO][4656] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:52.990216 containerd[1712]: 2025-11-24 00:25:52.975 [INFO][4656] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0", GenerateName:"calico-kube-controllers-757ffb85c9-", Namespace:"calico-system", SelfLink:"", UID:"2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757ffb85c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b", Pod:"calico-kube-controllers-757ffb85c9-k5zc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic12355834b7", MAC:"8a:16:0c:47:20:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:52.990282 containerd[1712]: 2025-11-24 00:25:52.987 [INFO][4656] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" Namespace="calico-system" Pod="calico-kube-controllers-757ffb85c9-k5zc7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--kube--controllers--757ffb85c9--k5zc7-eth0" Nov 24 00:25:53.007860 kubelet[3176]: E1124 00:25:53.007775 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:25:53.028506 containerd[1712]: time="2025-11-24T00:25:53.028478441Z" level=info msg="connecting to shim 81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b" address="unix:///run/containerd/s/679b56897a5504b5c034d275259ee9264fd9c639f5f34bb31c23c7231f75f655" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:53.055442 systemd[1]: Started cri-containerd-81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b.scope - libcontainer container 81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b. 
Nov 24 00:25:53.095599 containerd[1712]: time="2025-11-24T00:25:53.095575406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757ffb85c9-k5zc7,Uid:2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"81acbd345d09e4bb7cbc4f91175119a3668610e73700ba9d06eaf5479f68844b\"" Nov 24 00:25:53.097645 containerd[1712]: time="2025-11-24T00:25:53.097627493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:25:53.475216 containerd[1712]: time="2025-11-24T00:25:53.475180978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:53.478139 containerd[1712]: time="2025-11-24T00:25:53.478107884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:25:53.478211 containerd[1712]: time="2025-11-24T00:25:53.478194578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:25:53.478390 kubelet[3176]: E1124 00:25:53.478352 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:25:53.478675 kubelet[3176]: E1124 00:25:53.478401 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:25:53.478675 kubelet[3176]: E1124 00:25:53.478540 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:53.480431 kubelet[3176]: E1124 00:25:53.480377 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:25:53.811277 systemd-networkd[1331]: cali2d8683d9b4a: Gained IPv6LL Nov 24 00:25:53.881868 containerd[1712]: time="2025-11-24T00:25:53.881686496Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ks62v,Uid:a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:53.881868 containerd[1712]: time="2025-11-24T00:25:53.881813206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-77x2q,Uid:f3bcd41b-67c8-425f-834c-8c6ed20d39b0,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:53.882238 containerd[1712]: time="2025-11-24T00:25:53.882205744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdsr7,Uid:405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae,Namespace:calico-system,Attempt:0,}" Nov 24 00:25:54.012035 kubelet[3176]: E1124 00:25:54.011603 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:25:54.013262 kubelet[3176]: E1124 00:25:54.013182 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:25:54.055584 systemd-networkd[1331]: calid81475f1c40: Link UP Nov 
24 00:25:54.056270 systemd-networkd[1331]: calid81475f1c40: Gained carrier Nov 24 00:25:54.070676 containerd[1712]: 2025-11-24 00:25:53.946 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0 coredns-674b8bbfcf- kube-system a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a 830 0 2025-11-24 00:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 coredns-674b8bbfcf-ks62v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid81475f1c40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-" Nov 24 00:25:54.070676 containerd[1712]: 2025-11-24 00:25:53.946 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.070676 containerd[1712]: 2025-11-24 00:25:53.995 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" HandleID="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:53.995 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" 
HandleID="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"coredns-674b8bbfcf-ks62v", "timestamp":"2025-11-24 00:25:53.99567552 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:53.995 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:53.995 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:53.995 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:54.003 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:54.009 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:54.018 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:54.024 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.070987 containerd[1712]: 2025-11-24 00:25:54.028 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.028 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.031 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.041 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.196/26] block=192.168.42.192/26 handle="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.196/26] handle="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:25:54.071178 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.196/26] IPv6=[] ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" HandleID="k8s-pod-network.10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.071333 containerd[1712]: 2025-11-24 00:25:54.051 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"coredns-674b8bbfcf-ks62v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid81475f1c40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.071333 containerd[1712]: 2025-11-24 00:25:54.051 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.196/32] ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.071333 containerd[1712]: 2025-11-24 00:25:54.051 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid81475f1c40 ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.071333 containerd[1712]: 2025-11-24 00:25:54.056 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.071333 containerd[1712]: 2025-11-24 00:25:54.056 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" 
WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c", Pod:"coredns-674b8bbfcf-ks62v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid81475f1c40", MAC:"a6:dd:5e:01:93:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.071333 
containerd[1712]: 2025-11-24 00:25:54.068 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" Namespace="kube-system" Pod="coredns-674b8bbfcf-ks62v" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--ks62v-eth0" Nov 24 00:25:54.115564 containerd[1712]: time="2025-11-24T00:25:54.115521753Z" level=info msg="connecting to shim 10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c" address="unix:///run/containerd/s/b63b73fe41dbcbdb55e51f6e9cda272bcff9d808205f7f9aaa64dd5bbf811086" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:54.131249 systemd-networkd[1331]: calic12355834b7: Gained IPv6LL Nov 24 00:25:54.143310 systemd[1]: Started cri-containerd-10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c.scope - libcontainer container 10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c. Nov 24 00:25:54.157641 systemd-networkd[1331]: cali868a9461cc4: Link UP Nov 24 00:25:54.158400 systemd-networkd[1331]: cali868a9461cc4: Gained carrier Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:53.963 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0 goldmane-666569f655- calico-system f3bcd41b-67c8-425f-834c-8c6ed20d39b0 832 0 2025-11-24 00:25:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 goldmane-666569f655-77x2q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali868a9461cc4 [] [] }} ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" 
WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:53.964 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.006 [INFO][4778] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" HandleID="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Workload="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.006 [INFO][4778] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" HandleID="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Workload="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"goldmane-666569f655-77x2q", "timestamp":"2025-11-24 00:25:54.006585795 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.007 [INFO][4778] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4778] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.048 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.110 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.115 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.119 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.121 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.126 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.126 [INFO][4778] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.129 [INFO][4778] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985 Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.134 [INFO][4778] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.144 [INFO][4778] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.197/26] block=192.168.42.192/26 handle="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.145 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.197/26] handle="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.145 [INFO][4778] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:54.180564 containerd[1712]: 2025-11-24 00:25:54.145 [INFO][4778] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.197/26] IPv6=[] ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" HandleID="k8s-pod-network.f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Workload="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.147 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f3bcd41b-67c8-425f-834c-8c6ed20d39b0", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"goldmane-666569f655-77x2q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali868a9461cc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.147 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.197/32] ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.147 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali868a9461cc4 ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.159 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.159 [INFO][4752] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f3bcd41b-67c8-425f-834c-8c6ed20d39b0", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985", Pod:"goldmane-666569f655-77x2q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali868a9461cc4", MAC:"d6:7c:c6:53:5e:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.181461 containerd[1712]: 2025-11-24 00:25:54.178 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" Namespace="calico-system" Pod="goldmane-666569f655-77x2q" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-goldmane--666569f655--77x2q-eth0" Nov 24 00:25:54.210389 containerd[1712]: time="2025-11-24T00:25:54.210365332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ks62v,Uid:a7cdb8e2-b9cd-48e8-913a-1a9a9f053c7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c\"" Nov 24 00:25:54.220742 containerd[1712]: time="2025-11-24T00:25:54.220055929Z" level=info msg="CreateContainer within sandbox \"10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:25:54.229644 containerd[1712]: time="2025-11-24T00:25:54.229619283Z" level=info msg="connecting to shim f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985" address="unix:///run/containerd/s/f19bee814a4c1a4ec2191b32dcb7755048ead47bc3d88f4515aab9423280f191" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:54.248720 containerd[1712]: time="2025-11-24T00:25:54.247916620Z" level=info msg="Container 6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:54.260727 systemd-networkd[1331]: calif3b050f5bcf: Link UP Nov 24 00:25:54.261302 systemd[1]: Started cri-containerd-f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985.scope - libcontainer container f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985. 
Nov 24 00:25:54.262495 systemd-networkd[1331]: calif3b050f5bcf: Gained carrier Nov 24 00:25:54.262848 containerd[1712]: time="2025-11-24T00:25:54.262590561Z" level=info msg="CreateContainer within sandbox \"10e93f0b5fda6001b7894b052aaf4745b1bb36be78d289f25371f65652ccc37c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872\"" Nov 24 00:25:54.263542 containerd[1712]: time="2025-11-24T00:25:54.263301640Z" level=info msg="StartContainer for \"6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872\"" Nov 24 00:25:54.264916 containerd[1712]: time="2025-11-24T00:25:54.264651590Z" level=info msg="connecting to shim 6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872" address="unix:///run/containerd/s/b63b73fe41dbcbdb55e51f6e9cda272bcff9d808205f7f9aaa64dd5bbf811086" protocol=ttrpc version=3 Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:53.962 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0 csi-node-driver- calico-system 405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae 703 0 2025-11-24 00:25:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 csi-node-driver-zdsr7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3b050f5bcf [] [] }} ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:53.962 [INFO][4743] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.010 [INFO][4776] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" HandleID="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Workload="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.010 [INFO][4776] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" HandleID="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Workload="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"csi-node-driver-zdsr7", "timestamp":"2025-11-24 00:25:54.010544655 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.010 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.145 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.145 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.205 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.215 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.224 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.226 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.229 [INFO][4776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.229 [INFO][4776] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.230 [INFO][4776] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.237 [INFO][4776] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.254 [INFO][4776] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.198/26] block=192.168.42.192/26 handle="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.254 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.198/26] handle="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.254 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:54.289037 containerd[1712]: 2025-11-24 00:25:54.254 [INFO][4776] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.198/26] IPv6=[] ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" HandleID="k8s-pod-network.1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Workload="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.291427 containerd[1712]: 2025-11-24 00:25:54.257 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"csi-node-driver-zdsr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b050f5bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.291427 containerd[1712]: 2025-11-24 00:25:54.258 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.198/32] ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.291427 containerd[1712]: 2025-11-24 00:25:54.258 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b050f5bcf ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.291427 containerd[1712]: 2025-11-24 00:25:54.264 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.291427 
containerd[1712]: 2025-11-24 00:25:54.267 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e", Pod:"csi-node-driver-zdsr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3b050f5bcf", MAC:"ba:75:d9:ea:62:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:54.291427 containerd[1712]: 
2025-11-24 00:25:54.284 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" Namespace="calico-system" Pod="csi-node-driver-zdsr7" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-csi--node--driver--zdsr7-eth0" Nov 24 00:25:54.294364 systemd[1]: Started cri-containerd-6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872.scope - libcontainer container 6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872. Nov 24 00:25:54.343070 containerd[1712]: time="2025-11-24T00:25:54.343005237Z" level=info msg="StartContainer for \"6fe73a25f96bcaa1d13d905672f55959d9b713f3d0c841f9355076413b61a872\" returns successfully" Nov 24 00:25:54.369632 containerd[1712]: time="2025-11-24T00:25:54.369597327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-77x2q,Uid:f3bcd41b-67c8-425f-834c-8c6ed20d39b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f58708d95112cb0cff8b9b17a017eda73fc8ac67e7b4ee1c45804c04c48bd985\"" Nov 24 00:25:54.371820 containerd[1712]: time="2025-11-24T00:25:54.371771603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:25:54.372368 containerd[1712]: time="2025-11-24T00:25:54.372345481Z" level=info msg="connecting to shim 1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e" address="unix:///run/containerd/s/778feb73a04519a07539775147761fa148fb0b7056eff37830844ab6add9caec" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:54.394263 systemd[1]: Started cri-containerd-1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e.scope - libcontainer container 1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e. 
Nov 24 00:25:54.416362 containerd[1712]: time="2025-11-24T00:25:54.416339929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdsr7,Uid:405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fa9ec96846e4b91d3ebf624518fb7021c5353a5d379ecdc42eb4125aa2bc32e\"" Nov 24 00:25:54.731179 containerd[1712]: time="2025-11-24T00:25:54.731071731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:54.734038 containerd[1712]: time="2025-11-24T00:25:54.733992139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:25:54.734132 containerd[1712]: time="2025-11-24T00:25:54.734001213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:25:54.734256 kubelet[3176]: E1124 00:25:54.734223 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:25:54.734592 kubelet[3176]: E1124 00:25:54.734271 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:25:54.734592 kubelet[3176]: E1124 00:25:54.734480 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr7tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:54.734958 containerd[1712]: time="2025-11-24T00:25:54.734938822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:25:54.735991 kubelet[3176]: E1124 00:25:54.735962 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:25:54.880962 containerd[1712]: time="2025-11-24T00:25:54.880934049Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dtd9z,Uid:b6b0376e-593a-409d-bac3-21945844d4a4,Namespace:kube-system,Attempt:0,}" Nov 24 00:25:54.881279 containerd[1712]: time="2025-11-24T00:25:54.880934017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-r9q7n,Uid:75ee8f86-4798-48a9-84fa-9fab492c51e9,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:25:54.999713 systemd-networkd[1331]: cali4c41a495ff8: Link UP Nov 24 00:25:55.003232 systemd-networkd[1331]: cali4c41a495ff8: Gained carrier Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.939 [INFO][4997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0 calico-apiserver-6d8bbff79b- calico-apiserver 75ee8f86-4798-48a9-84fa-9fab492c51e9 828 0 2025-11-24 00:25:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8bbff79b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 calico-apiserver-6d8bbff79b-r9q7n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c41a495ff8 [] [] }} ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.939 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.017614 containerd[1712]: 
2025-11-24 00:25:54.965 [INFO][5019] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" HandleID="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.965 [INFO][5019] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" HandleID="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032aa60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.2-a-d148bafb83", "pod":"calico-apiserver-6d8bbff79b-r9q7n", "timestamp":"2025-11-24 00:25:54.965526767 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.965 [INFO][5019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.965 [INFO][5019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.965 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.970 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.972 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.976 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.977 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.978 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.978 [INFO][5019] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.979 [INFO][5019] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7 Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.985 [INFO][5019] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5019] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.199/26] block=192.168.42.192/26 handle="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.199/26] handle="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:55.017614 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5019] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.199/26] IPv6=[] ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" HandleID="k8s-pod-network.849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Workload="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:54.995 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0", GenerateName:"calico-apiserver-6d8bbff79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75ee8f86-4798-48a9-84fa-9fab492c51e9", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8bbff79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"calico-apiserver-6d8bbff79b-r9q7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c41a495ff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:54.995 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.199/32] ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:54.995 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c41a495ff8 ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:55.003 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:55.003 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0", GenerateName:"calico-apiserver-6d8bbff79b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75ee8f86-4798-48a9-84fa-9fab492c51e9", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8bbff79b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7", Pod:"calico-apiserver-6d8bbff79b-r9q7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali4c41a495ff8", MAC:"72:fa:ab:50:64:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:55.018466 containerd[1712]: 2025-11-24 00:25:55.013 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d8bbff79b-r9q7n" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-calico--apiserver--6d8bbff79b--r9q7n-eth0" Nov 24 00:25:55.022954 kubelet[3176]: E1124 00:25:55.022866 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:25:55.027796 kubelet[3176]: E1124 00:25:55.027766 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:25:55.061408 containerd[1712]: time="2025-11-24T00:25:55.060574214Z" level=info msg="connecting to shim 
849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7" address="unix:///run/containerd/s/6149cc723c56aefdb0ab6937fa1e0b1b4fb04690d164908aceb1f58e251cd2c6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:55.071508 kubelet[3176]: I1124 00:25:55.071468 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ks62v" podStartSLOduration=44.071454918 podStartE2EDuration="44.071454918s" podCreationTimestamp="2025-11-24 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:55.053699109 +0000 UTC m=+49.258866162" watchObservedRunningTime="2025-11-24 00:25:55.071454918 +0000 UTC m=+49.276621967" Nov 24 00:25:55.092059 containerd[1712]: time="2025-11-24T00:25:55.091949630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:55.095821 containerd[1712]: time="2025-11-24T00:25:55.094484549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:25:55.095890 containerd[1712]: time="2025-11-24T00:25:55.095866872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:25:55.096085 kubelet[3176]: E1124 00:25:55.095955 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:25:55.096085 kubelet[3176]: E1124 00:25:55.095985 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:25:55.096212 kubelet[3176]: E1124 00:25:55.096111 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false
,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:55.097337 systemd[1]: Started cri-containerd-849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7.scope - libcontainer container 849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7. Nov 24 00:25:55.100310 containerd[1712]: time="2025-11-24T00:25:55.100072895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:25:55.117992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973149341.mount: Deactivated successfully. 
Nov 24 00:25:55.129514 systemd-networkd[1331]: cali2877f5633f8: Link UP Nov 24 00:25:55.129623 systemd-networkd[1331]: cali2877f5633f8: Gained carrier Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.935 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0 coredns-674b8bbfcf- kube-system b6b0376e-593a-409d-bac3-21945844d4a4 823 0 2025-11-24 00:25:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.2-a-d148bafb83 coredns-674b8bbfcf-dtd9z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2877f5633f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.935 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.964 [INFO][5016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" HandleID="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.964 [INFO][5016] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" HandleID="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b8020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.2-a-d148bafb83", "pod":"coredns-674b8bbfcf-dtd9z", "timestamp":"2025-11-24 00:25:54.964817322 +0000 UTC"}, Hostname:"ci-4459.1.2-a-d148bafb83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.965 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:54.993 [INFO][5016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.2-a-d148bafb83' Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.072 [INFO][5016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.080 [INFO][5016] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.086 [INFO][5016] ipam/ipam.go 511: Trying affinity for 192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.088 [INFO][5016] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.093 [INFO][5016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.093 [INFO][5016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.095 [INFO][5016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.109 [INFO][5016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.124 [INFO][5016] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.200/26] block=192.168.42.192/26 handle="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.124 [INFO][5016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.200/26] handle="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" host="ci-4459.1.2-a-d148bafb83" Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.124 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:25:55.145457 containerd[1712]: 2025-11-24 00:25:55.124 [INFO][5016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.200/26] IPv6=[] ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" HandleID="k8s-pod-network.7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Workload="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.125 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6b0376e-593a-409d-bac3-21945844d4a4", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"", Pod:"coredns-674b8bbfcf-dtd9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2877f5633f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.125 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.200/32] ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.125 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2877f5633f8 ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.127 [INFO][4992] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.128 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6b0376e-593a-409d-bac3-21945844d4a4", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.2-a-d148bafb83", ContainerID:"7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a", Pod:"coredns-674b8bbfcf-dtd9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2877f5633f8", 
MAC:"32:fe:47:c2:27:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:25:55.145946 containerd[1712]: 2025-11-24 00:25:55.143 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" Namespace="kube-system" Pod="coredns-674b8bbfcf-dtd9z" WorkloadEndpoint="ci--4459.1.2--a--d148bafb83-k8s-coredns--674b8bbfcf--dtd9z-eth0" Nov 24 00:25:55.184484 containerd[1712]: time="2025-11-24T00:25:55.184374686Z" level=info msg="connecting to shim 7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a" address="unix:///run/containerd/s/d703bbb59803d6286eed89175bcefab670cc2f91f3590d71485419cc14d4bb15" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:25:55.211325 systemd[1]: Started cri-containerd-7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a.scope - libcontainer container 7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a. 
Nov 24 00:25:55.220443 containerd[1712]: time="2025-11-24T00:25:55.220313793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8bbff79b-r9q7n,Uid:75ee8f86-4798-48a9-84fa-9fab492c51e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"849b01226cd4caa6bf38145eec919a9c0762ae85fcb6423627282d848e0e15b7\"" Nov 24 00:25:55.249702 containerd[1712]: time="2025-11-24T00:25:55.249628433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dtd9z,Uid:b6b0376e-593a-409d-bac3-21945844d4a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a\"" Nov 24 00:25:55.256364 containerd[1712]: time="2025-11-24T00:25:55.256295770Z" level=info msg="CreateContainer within sandbox \"7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:25:55.274873 containerd[1712]: time="2025-11-24T00:25:55.274851463Z" level=info msg="Container b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:25:55.276404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840636477.mount: Deactivated successfully. 
Nov 24 00:25:55.286225 containerd[1712]: time="2025-11-24T00:25:55.286201675Z" level=info msg="CreateContainer within sandbox \"7610d039e2fa44cdf3c520b7bd907ae512033be93dfb7ed6a11a73ed7757c62a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8\"" Nov 24 00:25:55.286607 containerd[1712]: time="2025-11-24T00:25:55.286566852Z" level=info msg="StartContainer for \"b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8\"" Nov 24 00:25:55.287539 containerd[1712]: time="2025-11-24T00:25:55.287510903Z" level=info msg="connecting to shim b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8" address="unix:///run/containerd/s/d703bbb59803d6286eed89175bcefab670cc2f91f3590d71485419cc14d4bb15" protocol=ttrpc version=3 Nov 24 00:25:55.301282 systemd[1]: Started cri-containerd-b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8.scope - libcontainer container b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8. 
Nov 24 00:25:55.338929 containerd[1712]: time="2025-11-24T00:25:55.338879564Z" level=info msg="StartContainer for \"b9d022c12a351c240f962965b0ec50e3dd596eab7c8c665bac8e207910d77fc8\" returns successfully" Nov 24 00:25:55.347235 systemd-networkd[1331]: calid81475f1c40: Gained IPv6LL Nov 24 00:25:55.411329 systemd-networkd[1331]: calif3b050f5bcf: Gained IPv6LL Nov 24 00:25:55.475227 systemd-networkd[1331]: cali868a9461cc4: Gained IPv6LL Nov 24 00:25:55.483018 containerd[1712]: time="2025-11-24T00:25:55.482994172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:55.485637 containerd[1712]: time="2025-11-24T00:25:55.485610911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:25:55.485685 containerd[1712]: time="2025-11-24T00:25:55.485621342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:25:55.485789 kubelet[3176]: E1124 00:25:55.485750 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:25:55.485846 kubelet[3176]: E1124 00:25:55.485785 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:25:55.486098 kubelet[3176]: E1124 00:25:55.486059 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:55.487076 containerd[1712]: time="2025-11-24T00:25:55.487043784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:25:55.488294 kubelet[3176]: E1124 00:25:55.488231 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:55.851941 containerd[1712]: time="2025-11-24T00:25:55.851892597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:55.854220 containerd[1712]: time="2025-11-24T00:25:55.854199504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:25:55.854220 containerd[1712]: time="2025-11-24T00:25:55.854233802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:25:55.854376 kubelet[3176]: E1124 00:25:55.854350 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:55.854613 kubelet[3176]: E1124 00:25:55.854390 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:55.854782 kubelet[3176]: E1124 00:25:55.854746 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnsv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:55.855953 kubelet[3176]: E1124 00:25:55.855926 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:25:56.033896 kubelet[3176]: E1124 00:25:56.033860 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:25:56.034834 kubelet[3176]: E1124 00:25:56.034809 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" 
podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:25:56.035378 kubelet[3176]: E1124 00:25:56.035345 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:25:56.044186 kubelet[3176]: I1124 00:25:56.043775 3176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dtd9z" podStartSLOduration=45.043763908 podStartE2EDuration="45.043763908s" podCreationTimestamp="2025-11-24 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:25:56.043190741 +0000 UTC m=+50.248357799" watchObservedRunningTime="2025-11-24 00:25:56.043763908 +0000 UTC m=+50.248930952" Nov 24 00:25:56.307310 systemd-networkd[1331]: cali2877f5633f8: Gained IPv6LL Nov 24 00:25:56.627436 systemd-networkd[1331]: cali4c41a495ff8: Gained IPv6LL Nov 24 00:25:57.035179 kubelet[3176]: E1124 00:25:57.034518 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:26:04.882219 containerd[1712]: time="2025-11-24T00:26:04.882096961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:26:05.255116 containerd[1712]: time="2025-11-24T00:26:05.254977063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:05.257265 containerd[1712]: time="2025-11-24T00:26:05.257234283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:26:05.257326 containerd[1712]: time="2025-11-24T00:26:05.257315582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:26:05.257497 kubelet[3176]: E1124 00:26:05.257450 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:26:05.257781 kubelet[3176]: E1124 00:26:05.257516 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:26:05.257810 containerd[1712]: time="2025-11-24T00:26:05.257780459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:26:05.258100 kubelet[3176]: E1124 00:26:05.258068 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ab0198d2f3f240e9aba684dac3248824,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:05.626219 containerd[1712]: time="2025-11-24T00:26:05.626183356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:05.628949 containerd[1712]: time="2025-11-24T00:26:05.628926012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:26:05.629035 containerd[1712]: time="2025-11-24T00:26:05.628993613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:05.629217 kubelet[3176]: E1124 00:26:05.629143 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:05.629273 kubelet[3176]: E1124 00:26:05.629232 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:05.629543 kubelet[3176]: E1124 00:26:05.629478 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f52vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:05.629759 containerd[1712]: time="2025-11-24T00:26:05.629622644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:26:05.631116 kubelet[3176]: E1124 00:26:05.631069 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:26:05.987454 containerd[1712]: 
time="2025-11-24T00:26:05.987356191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:05.989891 containerd[1712]: time="2025-11-24T00:26:05.989852428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:26:05.990013 containerd[1712]: time="2025-11-24T00:26:05.989864223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:26:05.990100 kubelet[3176]: E1124 00:26:05.990046 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:26:05.990160 kubelet[3176]: E1124 00:26:05.990108 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:26:05.990279 kubelet[3176]: E1124 00:26:05.990248 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:05.991735 kubelet[3176]: E1124 00:26:05.991699 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:26:07.882554 containerd[1712]: time="2025-11-24T00:26:07.882325126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:26:08.257390 containerd[1712]: time="2025-11-24T00:26:08.257243664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:08.260013 containerd[1712]: time="2025-11-24T00:26:08.259983940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:26:08.260013 containerd[1712]: time="2025-11-24T00:26:08.260031219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:08.260192 
kubelet[3176]: E1124 00:26:08.260132 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:08.260883 kubelet[3176]: E1124 00:26:08.260193 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:08.260883 kubelet[3176]: E1124 00:26:08.260335 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnsv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:08.261528 kubelet[3176]: E1124 00:26:08.261491 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:26:08.881363 containerd[1712]: time="2025-11-24T00:26:08.881297260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:26:09.240560 containerd[1712]: time="2025-11-24T00:26:09.240454147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:09.243718 containerd[1712]: time="2025-11-24T00:26:09.243674173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:26:09.243816 containerd[1712]: time="2025-11-24T00:26:09.243746821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:26:09.243894 kubelet[3176]: E1124 00:26:09.243850 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:26:09.243948 kubelet[3176]: E1124 00:26:09.243903 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:26:09.244316 kubelet[3176]: E1124 00:26:09.244052 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:09.246312 containerd[1712]: time="2025-11-24T00:26:09.246282292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:26:09.598205 containerd[1712]: time="2025-11-24T00:26:09.598162766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:09.600898 containerd[1712]: time="2025-11-24T00:26:09.600875010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:26:09.600972 containerd[1712]: time="2025-11-24T00:26:09.600928846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:26:09.601063 kubelet[3176]: E1124 00:26:09.601031 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:26:09.601683 kubelet[3176]: E1124 00:26:09.601077 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:26:09.601683 kubelet[3176]: E1124 00:26:09.601215 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:09.602923 kubelet[3176]: E1124 00:26:09.602873 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:26:09.882204 containerd[1712]: time="2025-11-24T00:26:09.881651588Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:26:10.217769 containerd[1712]: time="2025-11-24T00:26:10.217642838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:10.220529 containerd[1712]: time="2025-11-24T00:26:10.220494744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:26:10.220621 containerd[1712]: time="2025-11-24T00:26:10.220499635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:26:10.220710 kubelet[3176]: E1124 00:26:10.220675 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:26:10.220783 kubelet[3176]: E1124 00:26:10.220721 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:26:10.221181 kubelet[3176]: E1124 00:26:10.220864 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:10.222048 kubelet[3176]: E1124 00:26:10.222017 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:26:10.881489 containerd[1712]: time="2025-11-24T00:26:10.881263474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:26:11.220455 containerd[1712]: 
time="2025-11-24T00:26:11.220325284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:11.223418 containerd[1712]: time="2025-11-24T00:26:11.223214834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:11.223588 containerd[1712]: time="2025-11-24T00:26:11.223217592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:26:11.223774 kubelet[3176]: E1124 00:26:11.223736 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:26:11.224333 kubelet[3176]: E1124 00:26:11.224084 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:26:11.224333 kubelet[3176]: E1124 00:26:11.224263 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr7tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:11.225502 kubelet[3176]: E1124 00:26:11.225385 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:26:17.884201 kubelet[3176]: E1124 00:26:17.883055 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:26:18.881333 kubelet[3176]: E1124 00:26:18.881250 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:26:20.882287 kubelet[3176]: E1124 00:26:20.882225 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:26:23.886699 kubelet[3176]: E1124 00:26:23.886660 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:26:24.886319 kubelet[3176]: E1124 00:26:24.886254 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:26:25.882375 kubelet[3176]: E1124 00:26:25.881649 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:26:25.912694 waagent[1894]: 2025-11-24T00:26:25.912640Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Nov 24 00:26:25.919531 waagent[1894]: 2025-11-24T00:26:25.919499Z INFO ExtHandler Nov 24 00:26:25.919620 waagent[1894]: 2025-11-24T00:26:25.919582Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 17dba9c9-b4a2-4514-8daa-0d06aab8935d eTag: 14829309004621092453 source: Fabric] Nov 24 00:26:25.919834 waagent[1894]: 2025-11-24T00:26:25.919809Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 24 00:26:25.920346 waagent[1894]: 2025-11-24T00:26:25.920319Z INFO ExtHandler Nov 24 00:26:25.920409 waagent[1894]: 2025-11-24T00:26:25.920373Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Nov 24 00:26:25.974875 waagent[1894]: 2025-11-24T00:26:25.974849Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 24 00:26:26.035469 waagent[1894]: 2025-11-24T00:26:26.035429Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5C99881696F59195CC854F1016DB901F13704D6', 'hasPrivateKey': True} Nov 24 00:26:26.035985 waagent[1894]: 2025-11-24T00:26:26.035958Z INFO ExtHandler Fetch goal state completed Nov 24 00:26:26.036252 waagent[1894]: 2025-11-24T00:26:26.036232Z INFO ExtHandler ExtHandler Nov 24 00:26:26.036292 waagent[1894]: 2025-11-24T00:26:26.036278Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 00aeac91-fc30-4e50-82c6-40509c8dfab5 correlation f567bca7-983b-475d-9dd5-c186301757b3 created: 2025-11-24T00:26:20.895332Z] Nov 24 00:26:26.036527 waagent[1894]: 2025-11-24T00:26:26.036508Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 24 00:26:26.036900 waagent[1894]: 2025-11-24T00:26:26.036879Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Nov 24 00:26:31.884031 containerd[1712]: time="2025-11-24T00:26:31.883990049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:26:32.242548 containerd[1712]: time="2025-11-24T00:26:32.242434374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:32.248057 containerd[1712]: time="2025-11-24T00:26:32.247969316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:26:32.248057 containerd[1712]: time="2025-11-24T00:26:32.248013296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:32.248318 kubelet[3176]: E1124 00:26:32.248272 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:32.248671 kubelet[3176]: E1124 00:26:32.248601 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:32.249331 kubelet[3176]: E1124 00:26:32.248769 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f52vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:32.250676 kubelet[3176]: E1124 00:26:32.250644 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:26:32.882633 containerd[1712]: time="2025-11-24T00:26:32.882583028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:26:33.233346 containerd[1712]: 
time="2025-11-24T00:26:33.230731319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:33.237012 containerd[1712]: time="2025-11-24T00:26:33.236921269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:26:33.237012 containerd[1712]: time="2025-11-24T00:26:33.236972088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:26:33.237275 kubelet[3176]: E1124 00:26:33.237228 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:26:33.237321 kubelet[3176]: E1124 00:26:33.237290 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:26:33.238268 kubelet[3176]: E1124 00:26:33.238228 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ab0198d2f3f240e9aba684dac3248824,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:33.240075 containerd[1712]: time="2025-11-24T00:26:33.240055461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 
00:26:33.623595 containerd[1712]: time="2025-11-24T00:26:33.623552056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:33.627740 containerd[1712]: time="2025-11-24T00:26:33.627715355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:26:33.627826 containerd[1712]: time="2025-11-24T00:26:33.627785978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:26:33.628001 kubelet[3176]: E1124 00:26:33.627970 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:26:33.628303 kubelet[3176]: E1124 00:26:33.628016 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:26:33.628303 kubelet[3176]: E1124 00:26:33.628133 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:33.630230 kubelet[3176]: E1124 00:26:33.630190 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:26:35.882175 containerd[1712]: time="2025-11-24T00:26:35.881988298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:26:36.252187 containerd[1712]: time="2025-11-24T00:26:36.251829354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:36.255616 containerd[1712]: time="2025-11-24T00:26:36.255524482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:26:36.255616 containerd[1712]: time="2025-11-24T00:26:36.255592387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:36.255885 
kubelet[3176]: E1124 00:26:36.255857 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:36.256658 kubelet[3176]: E1124 00:26:36.256190 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:26:36.256658 kubelet[3176]: E1124 00:26:36.256326 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnsv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:36.258177 kubelet[3176]: E1124 00:26:36.257969 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:26:38.882817 containerd[1712]: time="2025-11-24T00:26:38.882740254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:26:39.253308 containerd[1712]: time="2025-11-24T00:26:39.253187315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:39.256221 containerd[1712]: time="2025-11-24T00:26:39.256191634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:26:39.256327 containerd[1712]: time="2025-11-24T00:26:39.256262918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:26:39.256408 kubelet[3176]: E1124 00:26:39.256373 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:26:39.256727 kubelet[3176]: E1124 00:26:39.256413 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:26:39.256727 kubelet[3176]: E1124 00:26:39.256627 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:39.257336 containerd[1712]: time="2025-11-24T00:26:39.257136993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:26:39.258411 kubelet[3176]: E1124 00:26:39.258376 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:26:39.605132 containerd[1712]: time="2025-11-24T00:26:39.604994230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:39.607659 containerd[1712]: time="2025-11-24T00:26:39.607553577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:26:39.607659 containerd[1712]: time="2025-11-24T00:26:39.607637898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:26:39.608192 kubelet[3176]: E1124 00:26:39.607962 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:26:39.608192 kubelet[3176]: E1124 00:26:39.608014 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:26:39.608353 kubelet[3176]: E1124 00:26:39.608134 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:39.610468 containerd[1712]: time="2025-11-24T00:26:39.610440585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:26:39.955461 containerd[1712]: time="2025-11-24T00:26:39.955354805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:39.958267 containerd[1712]: time="2025-11-24T00:26:39.958218596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:26:39.958396 containerd[1712]: time="2025-11-24T00:26:39.958316547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:26:39.958495 kubelet[3176]: E1124 00:26:39.958467 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:26:39.958547 kubelet[3176]: E1124 00:26:39.958530 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:26:39.959080 kubelet[3176]: E1124 00:26:39.959038 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:39.960355 kubelet[3176]: E1124 00:26:39.960227 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:26:40.882838 containerd[1712]: time="2025-11-24T00:26:40.882548246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:26:41.241767 containerd[1712]: time="2025-11-24T00:26:41.241443003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:26:41.244379 containerd[1712]: time="2025-11-24T00:26:41.244273306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:26:41.244379 containerd[1712]: time="2025-11-24T00:26:41.244357547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:26:41.244678 kubelet[3176]: E1124 00:26:41.244640 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:26:41.245136 kubelet[3176]: E1124 00:26:41.244944 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:26:41.245600 kubelet[3176]: E1124 00:26:41.245551 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr7tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:26:41.246946 kubelet[3176]: E1124 00:26:41.246855 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:26:43.885408 kubelet[3176]: E1124 00:26:43.885310 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:26:48.881835 kubelet[3176]: E1124 00:26:48.881786 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:26:48.883010 kubelet[3176]: E1124 00:26:48.882598 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:26:50.883640 kubelet[3176]: E1124 00:26:50.883598 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:26:50.885047 kubelet[3176]: E1124 00:26:50.885010 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:26:51.414532 systemd[1]: Started sshd@7-10.200.0.20:22-10.200.16.10:36152.service - OpenSSH per-connection server daemon (10.200.16.10:36152). 
Nov 24 00:26:51.972360 sshd[5278]: Accepted publickey for core from 10.200.16.10 port 36152 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:26:51.973387 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:26:51.977271 systemd-logind[1686]: New session 10 of user core. Nov 24 00:26:51.986297 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:26:52.464376 sshd[5281]: Connection closed by 10.200.16.10 port 36152 Nov 24 00:26:52.465310 sshd-session[5278]: pam_unix(sshd:session): session closed for user core Nov 24 00:26:52.471228 systemd-logind[1686]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:26:52.472101 systemd[1]: sshd@7-10.200.0.20:22-10.200.16.10:36152.service: Deactivated successfully. Nov 24 00:26:52.475738 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:26:52.481433 systemd-logind[1686]: Removed session 10. Nov 24 00:26:54.881809 kubelet[3176]: E1124 00:26:54.881655 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:26:57.566372 systemd[1]: Started sshd@8-10.200.0.20:22-10.200.16.10:36168.service - OpenSSH per-connection server daemon (10.200.16.10:36168). 
Nov 24 00:26:58.119947 sshd[5295]: Accepted publickey for core from 10.200.16.10 port 36168 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:26:58.121027 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:26:58.125098 systemd-logind[1686]: New session 11 of user core. Nov 24 00:26:58.132289 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:26:58.555455 sshd[5298]: Connection closed by 10.200.16.10 port 36168 Nov 24 00:26:58.557472 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Nov 24 00:26:58.560793 systemd-logind[1686]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:26:58.561545 systemd[1]: sshd@8-10.200.0.20:22-10.200.16.10:36168.service: Deactivated successfully. Nov 24 00:26:58.563842 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:26:58.566588 systemd-logind[1686]: Removed session 11. Nov 24 00:26:58.882427 kubelet[3176]: E1124 00:26:58.882040 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:26:59.883921 kubelet[3176]: E1124 00:26:59.883850 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:27:00.882739 kubelet[3176]: E1124 00:27:00.882646 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:27:02.883641 kubelet[3176]: E1124 00:27:02.883587 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:27:03.662406 systemd[1]: Started sshd@9-10.200.0.20:22-10.200.16.10:57160.service - OpenSSH per-connection server daemon (10.200.16.10:57160). Nov 24 00:27:04.218861 sshd[5312]: Accepted publickey for core from 10.200.16.10 port 57160 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:04.219857 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:04.223204 systemd-logind[1686]: New session 12 of user core. Nov 24 00:27:04.228269 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:27:04.648634 sshd[5315]: Connection closed by 10.200.16.10 port 57160 Nov 24 00:27:04.650849 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:04.654192 systemd[1]: sshd@9-10.200.0.20:22-10.200.16.10:57160.service: Deactivated successfully. Nov 24 00:27:04.656558 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:27:04.657339 systemd-logind[1686]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:27:04.659102 systemd-logind[1686]: Removed session 12. Nov 24 00:27:04.745008 systemd[1]: Started sshd@10-10.200.0.20:22-10.200.16.10:57162.service - OpenSSH per-connection server daemon (10.200.16.10:57162). 
Nov 24 00:27:04.881829 kubelet[3176]: E1124 00:27:04.881776 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:27:05.294632 sshd[5328]: Accepted publickey for core from 10.200.16.10 port 57162 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:05.295855 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:05.300118 systemd-logind[1686]: New session 13 of user core. Nov 24 00:27:05.307662 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:27:05.762743 sshd[5331]: Connection closed by 10.200.16.10 port 57162 Nov 24 00:27:05.763495 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:05.766885 systemd-logind[1686]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:27:05.767039 systemd[1]: sshd@10-10.200.0.20:22-10.200.16.10:57162.service: Deactivated successfully. Nov 24 00:27:05.768891 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:27:05.770249 systemd-logind[1686]: Removed session 13. Nov 24 00:27:05.859888 systemd[1]: Started sshd@11-10.200.0.20:22-10.200.16.10:57166.service - OpenSSH per-connection server daemon (10.200.16.10:57166). 
Nov 24 00:27:06.406172 sshd[5340]: Accepted publickey for core from 10.200.16.10 port 57166 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:06.408564 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:06.414199 systemd-logind[1686]: New session 14 of user core. Nov 24 00:27:06.419282 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:27:06.882637 kubelet[3176]: E1124 00:27:06.882601 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:27:06.897666 sshd[5345]: Connection closed by 10.200.16.10 port 57166 Nov 24 00:27:06.898469 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:06.903662 systemd[1]: sshd@11-10.200.0.20:22-10.200.16.10:57166.service: Deactivated successfully. Nov 24 00:27:06.904146 systemd-logind[1686]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:27:06.907088 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:27:06.909670 systemd-logind[1686]: Removed session 14. 
Nov 24 00:27:09.881757 kubelet[3176]: E1124 00:27:09.881715 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:27:11.996042 systemd[1]: Started sshd@12-10.200.0.20:22-10.200.16.10:33660.service - OpenSSH per-connection server daemon (10.200.16.10:33660). Nov 24 00:27:12.551514 sshd[5369]: Accepted publickey for core from 10.200.16.10 port 33660 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:12.552745 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:12.556207 systemd-logind[1686]: New session 15 of user core. Nov 24 00:27:12.562280 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:27:12.994227 sshd[5372]: Connection closed by 10.200.16.10 port 33660 Nov 24 00:27:12.994931 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:12.998652 systemd[1]: sshd@12-10.200.0.20:22-10.200.16.10:33660.service: Deactivated successfully. Nov 24 00:27:13.001783 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:27:13.004662 systemd-logind[1686]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:27:13.005919 systemd-logind[1686]: Removed session 15. 
Nov 24 00:27:13.882508 kubelet[3176]: E1124 00:27:13.882469 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:27:14.883066 containerd[1712]: time="2025-11-24T00:27:14.883027491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:27:15.235515 containerd[1712]: time="2025-11-24T00:27:15.235397607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:15.238059 containerd[1712]: time="2025-11-24T00:27:15.238019916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:27:15.238175 containerd[1712]: time="2025-11-24T00:27:15.238027133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:27:15.238273 kubelet[3176]: E1124 00:27:15.238247 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:27:15.238518 kubelet[3176]: E1124 00:27:15.238283 3176 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:27:15.238760 kubelet[3176]: E1124 00:27:15.238708 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ab0198d2f3f240e9aba684dac3248824,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:15.240635 containerd[1712]: time="2025-11-24T00:27:15.240462384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:27:15.587477 containerd[1712]: time="2025-11-24T00:27:15.587442376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:15.590502 containerd[1712]: time="2025-11-24T00:27:15.590451802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:27:15.590595 containerd[1712]: time="2025-11-24T00:27:15.590546256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:27:15.590693 kubelet[3176]: E1124 00:27:15.590662 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:27:15.590745 kubelet[3176]: E1124 00:27:15.590710 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:27:15.590859 kubelet[3176]: E1124 00:27:15.590826 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-922mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5ccc86d94b-4njwq_calico-system(b71df8d3-9ea3-44ea-a925-922c7dfc69b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:15.592296 kubelet[3176]: E1124 00:27:15.592241 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:27:16.882137 kubelet[3176]: E1124 00:27:16.882038 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae" Nov 24 00:27:17.883192 kubelet[3176]: E1124 00:27:17.882818 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:27:18.095367 systemd[1]: Started sshd@13-10.200.0.20:22-10.200.16.10:33674.service - OpenSSH per-connection server daemon (10.200.16.10:33674). Nov 24 00:27:18.649789 sshd[5386]: Accepted publickey for core from 10.200.16.10 port 33674 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:18.651133 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:18.655017 systemd-logind[1686]: New session 16 of user core. Nov 24 00:27:18.663322 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 24 00:27:19.116987 sshd[5389]: Connection closed by 10.200.16.10 port 33674 Nov 24 00:27:19.117476 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:19.120717 systemd[1]: sshd@13-10.200.0.20:22-10.200.16.10:33674.service: Deactivated successfully. Nov 24 00:27:19.122478 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:27:19.123305 systemd-logind[1686]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:27:19.124851 systemd-logind[1686]: Removed session 16. Nov 24 00:27:20.881311 kubelet[3176]: E1124 00:27:20.881248 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 00:27:21.881881 containerd[1712]: time="2025-11-24T00:27:21.881064044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:27:22.241420 containerd[1712]: time="2025-11-24T00:27:22.241196301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:22.243823 containerd[1712]: time="2025-11-24T00:27:22.243775694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:27:22.243916 containerd[1712]: time="2025-11-24T00:27:22.243777959Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:27:22.244006 kubelet[3176]: E1124 00:27:22.243947 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:27:22.244344 kubelet[3176]: E1124 00:27:22.244015 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:27:22.244344 kubelet[3176]: E1124 00:27:22.244171 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f52vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-qmsbd_calico-apiserver(6fa98e89-de7e-4aff-a7d8-ed455ce756f9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:22.246022 kubelet[3176]: E1124 00:27:22.245975 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9" Nov 24 00:27:24.216261 systemd[1]: Started sshd@14-10.200.0.20:22-10.200.16.10:57332.service - OpenSSH per-connection server daemon (10.200.16.10:57332). Nov 24 00:27:24.777751 sshd[5441]: Accepted publickey for core from 10.200.16.10 port 57332 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:24.779863 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:24.785127 systemd-logind[1686]: New session 17 of user core. Nov 24 00:27:24.790307 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:27:25.213499 sshd[5445]: Connection closed by 10.200.16.10 port 57332 Nov 24 00:27:25.213964 sshd-session[5441]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:25.216937 systemd[1]: sshd@14-10.200.0.20:22-10.200.16.10:57332.service: Deactivated successfully. Nov 24 00:27:25.218779 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:27:25.219761 systemd-logind[1686]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:27:25.220865 systemd-logind[1686]: Removed session 17. 
Nov 24 00:27:25.315947 systemd[1]: Started sshd@15-10.200.0.20:22-10.200.16.10:57348.service - OpenSSH per-connection server daemon (10.200.16.10:57348). Nov 24 00:27:25.876555 sshd[5457]: Accepted publickey for core from 10.200.16.10 port 57348 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:25.878250 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:25.883737 systemd-logind[1686]: New session 18 of user core. Nov 24 00:27:25.889305 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:27:26.398449 sshd[5460]: Connection closed by 10.200.16.10 port 57348 Nov 24 00:27:26.399224 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:26.405010 systemd[1]: sshd@15-10.200.0.20:22-10.200.16.10:57348.service: Deactivated successfully. Nov 24 00:27:26.407696 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:27:26.410131 systemd-logind[1686]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:27:26.411513 systemd-logind[1686]: Removed session 18. Nov 24 00:27:26.501355 systemd[1]: Started sshd@16-10.200.0.20:22-10.200.16.10:57354.service - OpenSSH per-connection server daemon (10.200.16.10:57354). Nov 24 00:27:27.048592 sshd[5470]: Accepted publickey for core from 10.200.16.10 port 57354 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:27.050336 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:27.055713 systemd-logind[1686]: New session 19 of user core. Nov 24 00:27:27.061299 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 24 00:27:27.903369 kubelet[3176]: E1124 00:27:27.903283 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9" Nov 24 00:27:28.196445 sshd[5473]: Connection closed by 10.200.16.10 port 57354 Nov 24 00:27:28.197048 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:28.201673 systemd[1]: sshd@16-10.200.0.20:22-10.200.16.10:57354.service: Deactivated successfully. Nov 24 00:27:28.205634 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:27:28.207286 systemd-logind[1686]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:27:28.210732 systemd-logind[1686]: Removed session 19. Nov 24 00:27:28.303286 systemd[1]: Started sshd@17-10.200.0.20:22-10.200.16.10:57364.service - OpenSSH per-connection server daemon (10.200.16.10:57364). 
Nov 24 00:27:28.852175 sshd[5491]: Accepted publickey for core from 10.200.16.10 port 57364 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:28.852930 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:28.857221 systemd-logind[1686]: New session 20 of user core. Nov 24 00:27:28.864294 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 00:27:28.881361 containerd[1712]: time="2025-11-24T00:27:28.881334526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:27:29.221406 containerd[1712]: time="2025-11-24T00:27:29.221266143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:29.225830 containerd[1712]: time="2025-11-24T00:27:29.224300841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:27:29.225830 containerd[1712]: time="2025-11-24T00:27:29.225228542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:27:29.225961 kubelet[3176]: E1124 00:27:29.225387 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:27:29.225961 kubelet[3176]: E1124 00:27:29.225429 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:27:29.225961 kubelet[3176]: E1124 00:27:29.225895 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnsv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8bbff79b-r9q7n_calico-apiserver(75ee8f86-4798-48a9-84fa-9fab492c51e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:29.228168 kubelet[3176]: E1124 00:27:29.227036 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9" Nov 24 00:27:29.456882 sshd[5501]: Connection closed by 10.200.16.10 port 57364 Nov 24 00:27:29.460350 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Nov 24 00:27:29.464738 
systemd-logind[1686]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:27:29.466538 systemd[1]: sshd@17-10.200.0.20:22-10.200.16.10:57364.service: Deactivated successfully. Nov 24 00:27:29.469821 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:27:29.471656 systemd-logind[1686]: Removed session 20. Nov 24 00:27:29.572377 systemd[1]: Started sshd@18-10.200.0.20:22-10.200.16.10:57370.service - OpenSSH per-connection server daemon (10.200.16.10:57370). Nov 24 00:27:29.882366 containerd[1712]: time="2025-11-24T00:27:29.882241624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:27:30.123187 sshd[5511]: Accepted publickey for core from 10.200.16.10 port 57370 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg Nov 24 00:27:30.124368 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:27:30.128760 systemd-logind[1686]: New session 21 of user core. Nov 24 00:27:30.130306 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 24 00:27:30.238374 containerd[1712]: time="2025-11-24T00:27:30.238293715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:30.240980 containerd[1712]: time="2025-11-24T00:27:30.240947982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:27:30.241070 containerd[1712]: time="2025-11-24T00:27:30.240997068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:27:30.241246 kubelet[3176]: E1124 00:27:30.241209 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:27:30.241499 kubelet[3176]: E1124 00:27:30.241274 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:27:30.241499 kubelet[3176]: E1124 00:27:30.241432 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfn4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-757ffb85c9-k5zc7_calico-system(2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:30.243233 kubelet[3176]: E1124 00:27:30.242913 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6" Nov 24 00:27:30.556500 sshd[5514]: Connection closed by 10.200.16.10 port 57370 Nov 24 00:27:30.556954 sshd-session[5511]: pam_unix(sshd:session): session closed for user core Nov 24 
00:27:30.561656 systemd[1]: sshd@18-10.200.0.20:22-10.200.16.10:57370.service: Deactivated successfully. Nov 24 00:27:30.563609 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:27:30.564557 systemd-logind[1686]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:27:30.568201 systemd-logind[1686]: Removed session 21. Nov 24 00:27:31.884694 containerd[1712]: time="2025-11-24T00:27:31.883777411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:27:32.220381 containerd[1712]: time="2025-11-24T00:27:32.219779211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:32.223265 containerd[1712]: time="2025-11-24T00:27:32.223173563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:27:32.223265 containerd[1712]: time="2025-11-24T00:27:32.223244495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:27:32.223485 kubelet[3176]: E1124 00:27:32.223443 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:27:32.224344 kubelet[3176]: E1124 00:27:32.223781 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:27:32.224422 containerd[1712]: time="2025-11-24T00:27:32.224187862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:27:32.224870 kubelet[3176]: E1124 00:27:32.224130 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr7tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-77x2q_calico-system(f3bcd41b-67c8-425f-834c-8c6ed20d39b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:32.226062 kubelet[3176]: E1124 00:27:32.226025 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0" Nov 24 
00:27:32.592846 containerd[1712]: time="2025-11-24T00:27:32.592681834Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:32.595456 containerd[1712]: time="2025-11-24T00:27:32.595339799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:27:32.595456 containerd[1712]: time="2025-11-24T00:27:32.595430529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:27:32.595826 kubelet[3176]: E1124 00:27:32.595758 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:27:32.595826 kubelet[3176]: E1124 00:27:32.595810 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:27:32.596356 kubelet[3176]: E1124 00:27:32.596309 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:27:32.598385 containerd[1712]: time="2025-11-24T00:27:32.598050982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:27:32.956315 containerd[1712]: time="2025-11-24T00:27:32.955952914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:27:32.961455 containerd[1712]: time="2025-11-24T00:27:32.961363864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:27:32.961559 containerd[1712]: time="2025-11-24T00:27:32.961460827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:27:32.962288 kubelet[3176]: E1124 00:27:32.962257 3176 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:27:32.962380 kubelet[3176]: E1124 00:27:32.962298 3176 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:27:32.962449 kubelet[3176]: E1124 
00:27:32.962424 3176 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zdsr7_calico-system(405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 24 00:27:32.963609 kubelet[3176]: E1124 00:27:32.963582 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae"
Nov 24 00:27:34.881696 kubelet[3176]: E1124 00:27:34.881653 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9"
Nov 24 00:27:35.655203 systemd[1]: Started sshd@19-10.200.0.20:22-10.200.16.10:41758.service - OpenSSH per-connection server daemon (10.200.16.10:41758).
Nov 24 00:27:36.210851 sshd[5530]: Accepted publickey for core from 10.200.16.10 port 41758 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:27:36.213615 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:27:36.222780 systemd-logind[1686]: New session 22 of user core.
Nov 24 00:27:36.228283 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 24 00:27:36.667101 sshd[5533]: Connection closed by 10.200.16.10 port 41758
Nov 24 00:27:36.667599 sshd-session[5530]: pam_unix(sshd:session): session closed for user core
Nov 24 00:27:36.670731 systemd[1]: sshd@19-10.200.0.20:22-10.200.16.10:41758.service: Deactivated successfully.
Nov 24 00:27:36.672424 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 00:27:36.673268 systemd-logind[1686]: Session 22 logged out. Waiting for processes to exit.
Nov 24 00:27:36.674333 systemd-logind[1686]: Removed session 22.
Nov 24 00:27:40.882476 kubelet[3176]: E1124 00:27:40.882428 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9"
Nov 24 00:27:41.770257 systemd[1]: Started sshd@20-10.200.0.20:22-10.200.16.10:45576.service - OpenSSH per-connection server daemon (10.200.16.10:45576).
Nov 24 00:27:41.885170 kubelet[3176]: E1124 00:27:41.883060 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9"
Nov 24 00:27:42.319059 sshd[5547]: Accepted publickey for core from 10.200.16.10 port 45576 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:27:42.320374 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:27:42.326418 systemd-logind[1686]: New session 23 of user core.
Nov 24 00:27:42.333437 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 00:27:42.747578 sshd[5550]: Connection closed by 10.200.16.10 port 45576
Nov 24 00:27:42.748074 sshd-session[5547]: pam_unix(sshd:session): session closed for user core
Nov 24 00:27:42.753709 systemd[1]: sshd@20-10.200.0.20:22-10.200.16.10:45576.service: Deactivated successfully.
Nov 24 00:27:42.756621 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 00:27:42.758781 systemd-logind[1686]: Session 23 logged out. Waiting for processes to exit.
Nov 24 00:27:42.759688 systemd-logind[1686]: Removed session 23.
Nov 24 00:27:44.881694 kubelet[3176]: E1124 00:27:44.881626 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6"
Nov 24 00:27:45.884736 kubelet[3176]: E1124 00:27:45.884546 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zdsr7" podUID="405d9e27-2783-4e51-8c7a-b9ed2ffdd4ae"
Nov 24 00:27:46.882627 kubelet[3176]: E1124 00:27:46.882585 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-77x2q" podUID="f3bcd41b-67c8-425f-834c-8c6ed20d39b0"
Nov 24 00:27:47.845120 systemd[1]: Started sshd@21-10.200.0.20:22-10.200.16.10:45580.service - OpenSSH per-connection server daemon (10.200.16.10:45580).
Nov 24 00:27:48.389004 sshd[5562]: Accepted publickey for core from 10.200.16.10 port 45580 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:27:48.390367 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:27:48.394571 systemd-logind[1686]: New session 24 of user core.
Nov 24 00:27:48.400467 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 24 00:27:48.854256 sshd[5565]: Connection closed by 10.200.16.10 port 45580
Nov 24 00:27:48.855326 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Nov 24 00:27:48.858475 systemd-logind[1686]: Session 24 logged out. Waiting for processes to exit.
Nov 24 00:27:48.859079 systemd[1]: sshd@21-10.200.0.20:22-10.200.16.10:45580.service: Deactivated successfully.
Nov 24 00:27:48.860721 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 00:27:48.862145 systemd-logind[1686]: Removed session 24.
Nov 24 00:27:49.886176 kubelet[3176]: E1124 00:27:49.886127 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-qmsbd" podUID="6fa98e89-de7e-4aff-a7d8-ed455ce756f9"
Nov 24 00:27:53.953355 systemd[1]: Started sshd@22-10.200.0.20:22-10.200.16.10:33126.service - OpenSSH per-connection server daemon (10.200.16.10:33126).
Nov 24 00:27:54.503246 sshd[5601]: Accepted publickey for core from 10.200.16.10 port 33126 ssh2: RSA SHA256:bSxnX3P2/LZJNh7pdjjcTxxNjugazb6R1LIXEtO21pg
Nov 24 00:27:54.504119 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:27:54.508771 systemd-logind[1686]: New session 25 of user core.
Nov 24 00:27:54.513310 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 24 00:27:54.978241 sshd[5604]: Connection closed by 10.200.16.10 port 33126
Nov 24 00:27:54.981327 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Nov 24 00:27:54.986564 systemd[1]: sshd@22-10.200.0.20:22-10.200.16.10:33126.service: Deactivated successfully.
Nov 24 00:27:54.988363 systemd-logind[1686]: Session 25 logged out. Waiting for processes to exit.
Nov 24 00:27:54.988889 systemd[1]: session-25.scope: Deactivated successfully.
Nov 24 00:27:54.993039 systemd-logind[1686]: Removed session 25.
Nov 24 00:27:55.883077 kubelet[3176]: E1124 00:27:55.882544 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8bbff79b-r9q7n" podUID="75ee8f86-4798-48a9-84fa-9fab492c51e9"
Nov 24 00:27:55.883077 kubelet[3176]: E1124 00:27:55.882833 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757ffb85c9-k5zc7" podUID="2e86d1f8-53a9-4c14-a8cb-ffab2ee1ecb6"
Nov 24 00:27:55.883941 kubelet[3176]: E1124 00:27:55.883912 3176 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5ccc86d94b-4njwq" podUID="b71df8d3-9ea3-44ea-a925-922c7dfc69b9"