Nov 5 15:51:31.234814 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025 Nov 5 15:51:31.234849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:51:31.234866 kernel: BIOS-provided physical RAM map: Nov 5 15:51:31.234874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 5 15:51:31.234881 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 5 15:51:31.234889 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Nov 5 15:51:31.234899 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Nov 5 15:51:31.234907 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Nov 5 15:51:31.236134 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Nov 5 15:51:31.236152 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 5 15:51:31.236161 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 5 15:51:31.236169 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 5 15:51:31.236176 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 5 15:51:31.236184 kernel: printk: legacy bootconsole [earlyser0] enabled Nov 5 15:51:31.236198 kernel: NX (Execute Disable) protection: active Nov 5 15:51:31.236206 kernel: APIC: Static calls initialized Nov 5 15:51:31.236214 kernel: efi: EFI v2.7 by Microsoft Nov 5 15:51:31.236222 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f437518 RNG=0x3ffd2018 Nov 5 15:51:31.236231 kernel: random: crng init done Nov 5 15:51:31.236239 kernel: secureboot: Secure boot disabled Nov 5 15:51:31.236247 kernel: SMBIOS 3.1.0 present. 
Nov 5 15:51:31.236255 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 5 15:51:31.236263 kernel: DMI: Memory slots populated: 2/2 Nov 5 15:51:31.236273 kernel: Hypervisor detected: Microsoft Hyper-V Nov 5 15:51:31.236457 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 5 15:51:31.236465 kernel: Hyper-V: Nested features: 0x3e0101 Nov 5 15:51:31.236473 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 5 15:51:31.236482 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 5 15:51:31.236490 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 5 15:51:31.236498 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 5 15:51:31.236506 kernel: tsc: Detected 2300.000 MHz processor Nov 5 15:51:31.236514 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 15:51:31.236524 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 15:51:31.236535 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 5 15:51:31.236544 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 5 15:51:31.236553 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 15:51:31.236562 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 5 15:51:31.236571 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 5 15:51:31.236579 kernel: Using GB pages for direct mapping Nov 5 15:51:31.236588 kernel: ACPI: Early table checksum verification disabled Nov 5 15:51:31.236602 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 5 15:51:31.236611 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236620 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236629 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 5 15:51:31.236638 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 5 15:51:31.236649 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236658 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236667 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236676 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 5 15:51:31.236685 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 5 15:51:31.236694 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:51:31.236705 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 5 15:51:31.236713 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 5 15:51:31.236722 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 5 15:51:31.236731 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 5 15:51:31.236740 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 5 15:51:31.236749 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 5 15:51:31.236758 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 5 15:51:31.236769 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 5 15:51:31.236778 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 5 15:51:31.236787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 5 15:51:31.236796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 5 15:51:31.236805 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 5 15:51:31.236814 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 5 15:51:31.236823 kernel: Zone ranges: Nov 5 15:51:31.236834 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 15:51:31.236843 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 5 15:51:31.236852 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 5 15:51:31.236861 kernel: Device empty Nov 5 15:51:31.236870 kernel: Movable zone start for each node Nov 5 15:51:31.236879 kernel: Early memory node ranges Nov 5 15:51:31.236887 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 5 15:51:31.236896 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 5 15:51:31.236907 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 5 15:51:31.236916 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 5 15:51:31.236925 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 5 15:51:31.236934 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 5 15:51:31.236943 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 15:51:31.236952 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 5 15:51:31.236961 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 5 15:51:31.236972 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 5 15:51:31.236981 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 5 15:51:31.236990 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 5 15:51:31.236999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 15:51:31.237008 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 15:51:31.237017 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 5 15:51:31.237026 kernel: TSC deadline timer available Nov 5 15:51:31.237037 kernel: CPU topo: Max. logical packages: 1 Nov 5 15:51:31.237046 kernel: CPU topo: Max. logical dies: 1 Nov 5 15:51:31.237054 kernel: CPU topo: Max. dies per package: 1 Nov 5 15:51:31.237063 kernel: CPU topo: Max. threads per core: 2 Nov 5 15:51:31.237071 kernel: CPU topo: Num. cores per package: 1 Nov 5 15:51:31.237080 kernel: CPU topo: Num. 
threads per package: 2 Nov 5 15:51:31.237089 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 5 15:51:31.237100 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 5 15:51:31.237109 kernel: Booting paravirtualized kernel on Hyper-V Nov 5 15:51:31.237119 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 15:51:31.237128 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 5 15:51:31.237136 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 5 15:51:31.237145 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 5 15:51:31.237154 kernel: pcpu-alloc: [0] 0 1 Nov 5 15:51:31.237165 kernel: Hyper-V: PV spinlocks enabled Nov 5 15:51:31.237174 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 5 15:51:31.237184 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:51:31.237193 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 5 15:51:31.237203 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 15:51:31.237212 kernel: Fallback order for Node 0: 0 Nov 5 15:51:31.237221 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 5 15:51:31.237231 kernel: Policy zone: Normal Nov 5 15:51:31.237240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 15:51:31.237249 kernel: software IO TLB: area num 2. Nov 5 15:51:31.237258 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 15:51:31.237267 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 15:51:31.237286 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 15:51:31.237296 kernel: Dynamic Preempt: voluntary Nov 5 15:51:31.237306 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 15:51:31.237316 kernel: rcu: RCU event tracing is enabled. Nov 5 15:51:31.237325 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 15:51:31.237341 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 15:51:31.237353 kernel: Rude variant of Tasks RCU enabled. Nov 5 15:51:31.237363 kernel: Tracing variant of Tasks RCU enabled. Nov 5 15:51:31.237372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 15:51:31.237382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 15:51:31.237392 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:51:31.237403 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:51:31.237412 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:51:31.237422 kernel: Using NULL legacy PIC Nov 5 15:51:31.237431 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 5 15:51:31.237443 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 5 15:51:31.237452 kernel: Console: colour dummy device 80x25 Nov 5 15:51:31.237462 kernel: printk: legacy console [tty1] enabled Nov 5 15:51:31.237471 kernel: printk: legacy console [ttyS0] enabled Nov 5 15:51:31.237481 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 5 15:51:31.237490 kernel: ACPI: Core revision 20240827 Nov 5 15:51:31.237499 kernel: Failed to register legacy timer interrupt Nov 5 15:51:31.237511 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 15:51:31.237520 kernel: x2apic enabled Nov 5 15:51:31.237530 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 15:51:31.237539 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 5 15:51:31.237549 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 5 15:51:31.237559 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 5 15:51:31.237568 kernel: Hyper-V: Using IPI hypercalls Nov 5 15:51:31.237579 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 5 15:51:31.237589 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 5 15:51:31.237598 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 5 15:51:31.237608 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 5 15:51:31.237618 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 5 15:51:31.237628 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 5 15:51:31.237637 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Nov 5 15:51:31.237649 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Nov 5 15:51:31.237659 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 5 15:51:31.237668 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 5 15:51:31.237677 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 5 15:51:31.237686 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 15:51:31.237696 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 15:51:31.237705 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 15:51:31.237714 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 5 15:51:31.237724 kernel: RETBleed: Vulnerable Nov 5 15:51:31.237735 kernel: Speculative Store Bypass: Vulnerable Nov 5 15:51:31.237744 kernel: active return thunk: its_return_thunk Nov 5 15:51:31.237752 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 5 15:51:31.237760 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 15:51:31.237772 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 15:51:31.237780 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 15:51:31.237789 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 5 15:51:31.237799 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 5 15:51:31.237808 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 5 15:51:31.237817 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 5 15:51:31.237828 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 5 15:51:31.237837 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 5 15:51:31.237845 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 15:51:31.237853 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 5 15:51:31.237865 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 5 15:51:31.237872 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 5 15:51:31.237880 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 5 15:51:31.237889 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 5 15:51:31.237898 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 5 15:51:31.237918 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 5 15:51:31.237930 kernel: Freeing SMP alternatives memory: 32K Nov 5 15:51:31.237939 kernel: pid_max: default: 32768 minimum: 301 Nov 5 15:51:31.237948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 15:51:31.237957 kernel: landlock: Up and running. Nov 5 15:51:31.237965 kernel: SELinux: Initializing. Nov 5 15:51:31.237974 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:51:31.237984 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:51:31.237993 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 5 15:51:31.238002 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 5 15:51:31.238012 kernel: signal: max sigframe size: 11952 Nov 5 15:51:31.238023 kernel: rcu: Hierarchical SRCU implementation. Nov 5 15:51:31.238033 kernel: rcu: Max phase no-delay instances is 400. Nov 5 15:51:31.238108 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 15:51:31.238120 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 5 15:51:31.238149 kernel: smp: Bringing up secondary CPUs ... Nov 5 15:51:31.238211 kernel: smpboot: x86: Booting SMP configuration: Nov 5 15:51:31.240141 kernel: .... 
node #0, CPUs: #1 Nov 5 15:51:31.240154 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 15:51:31.240168 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Nov 5 15:51:31.240179 kernel: Memory: 8099552K/8383228K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 277460K reserved, 0K cma-reserved) Nov 5 15:51:31.240189 kernel: devtmpfs: initialized Nov 5 15:51:31.240199 kernel: x86/mm: Memory block size: 128MB Nov 5 15:51:31.240209 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 5 15:51:31.240219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 15:51:31.240229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 15:51:31.240241 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 15:51:31.240250 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 15:51:31.240259 kernel: audit: initializing netlink subsys (disabled) Nov 5 15:51:31.240269 kernel: audit: type=2000 audit(1762357885.030:1): state=initialized audit_enabled=0 res=1 Nov 5 15:51:31.240293 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 15:51:31.240303 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 15:51:31.240313 kernel: cpuidle: using governor menu Nov 5 15:51:31.240325 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 15:51:31.240334 kernel: dca service started, version 1.12.1 Nov 5 15:51:31.240344 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 5 15:51:31.240354 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 5 15:51:31.240363 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 5 15:51:31.240373 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 15:51:31.240383 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 15:51:31.240395 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:51:31.240405 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:51:31.240414 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:51:31.240424 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:51:31.240434 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:51:31.240443 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:51:31.240452 kernel: ACPI: Interpreter enabled Nov 5 15:51:31.240463 kernel: ACPI: PM: (supports S0 S5) Nov 5 15:51:31.240473 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 15:51:31.240483 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 15:51:31.240493 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 5 15:51:31.240503 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 5 15:51:31.240512 kernel: iommu: Default domain type: Translated Nov 5 15:51:31.240521 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:51:31.240532 kernel: efivars: Registered efivars operations Nov 5 15:51:31.240541 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:51:31.240551 kernel: PCI: System does not support PCI Nov 5 15:51:31.240562 kernel: vgaarb: loaded Nov 5 15:51:31.240571 kernel: clocksource: Switched to clocksource tsc-early Nov 5 15:51:31.240581 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:51:31.240591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:51:31.240603 kernel: pnp: PnP ACPI init Nov 5 15:51:31.240612 kernel: pnp: PnP ACPI: found 3 devices Nov 5 15:51:31.240622 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:51:31.240633 kernel: NET: Registered PF_INET protocol family Nov 5 15:51:31.240644 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 5 15:51:31.240656 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 5 15:51:31.240667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:51:31.240681 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 15:51:31.240694 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 5 15:51:31.240705 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 5 15:51:31.240716 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 5 15:51:31.240726 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 5 15:51:31.240737 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:51:31.240749 kernel: NET: Registered PF_XDP protocol family Nov 5 15:51:31.240763 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:51:31.240774 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 5 15:51:31.240786 kernel: software IO TLB: mapped [mem 0x00000000366e7000-0x000000003a6e7000] (64MB) Nov 5 15:51:31.240796 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 5 15:51:31.240806 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 5 15:51:31.240817 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Nov 5 
15:51:31.240829 kernel: clocksource: Switched to clocksource tsc Nov 5 15:51:31.240841 kernel: Initialise system trusted keyrings Nov 5 15:51:31.240852 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 5 15:51:31.240862 kernel: Key type asymmetric registered Nov 5 15:51:31.240873 kernel: Asymmetric key parser 'x509' registered Nov 5 15:51:31.240883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:51:31.240895 kernel: io scheduler mq-deadline registered Nov 5 15:51:31.240906 kernel: io scheduler kyber registered Nov 5 15:51:31.240919 kernel: io scheduler bfq registered Nov 5 15:51:31.240930 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:51:31.240940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:51:31.240951 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:51:31.240961 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 5 15:51:31.240973 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:51:31.240983 kernel: i8042: PNP: No PS/2 controller found. Nov 5 15:51:31.241191 kernel: rtc_cmos 00:02: registered as rtc0 Nov 5 15:51:31.241321 kernel: rtc_cmos 00:02: setting system clock to 2025-11-05T15:51:27 UTC (1762357887) Nov 5 15:51:31.241425 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 5 15:51:31.241437 kernel: intel_pstate: Intel P-state driver initializing Nov 5 15:51:31.241447 kernel: efifb: probing for efifb Nov 5 15:51:31.241457 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 5 15:51:31.241469 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 5 15:51:31.241479 kernel: efifb: scrolling: redraw Nov 5 15:51:31.241488 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 5 15:51:31.241498 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:51:31.241507 kernel: fb0: EFI VGA frame buffer device Nov 5 15:51:31.241517 kernel: pstore: Using crash dump compression: deflate Nov 5 15:51:31.241526 kernel: pstore: Registered efi_pstore as persistent store backend Nov 5 15:51:31.241536 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:51:31.241547 kernel: Segment Routing with IPv6 Nov 5 15:51:31.241557 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:51:31.241566 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:51:31.241576 kernel: Key type dns_resolver registered Nov 5 15:51:31.241585 kernel: IPI shorthand broadcast: enabled Nov 5 15:51:31.241595 kernel: sched_clock: Marking stable (1591004775, 108799756)->(2063086625, -363282094) Nov 5 15:51:31.241604 kernel: registered taskstats version 1 Nov 5 15:51:31.241616 kernel: Loading compiled-in X.509 certificates Nov 5 15:51:31.241626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:51:31.241635 kernel: Demotion targets for Node 0: null Nov 5 15:51:31.241645 kernel: Key type .fscrypt registered Nov 5 15:51:31.241654 kernel: Key type fscrypt-provisioning registered Nov 5 15:51:31.241663 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 5 15:51:31.241673 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:51:31.241684 kernel: ima: No architecture policies found Nov 5 15:51:31.241694 kernel: clk: Disabling unused clocks Nov 5 15:51:31.241704 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 15:51:31.241714 kernel: Write protecting the kernel read-only data: 40960k Nov 5 15:51:31.241723 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 15:51:31.241733 kernel: Run /init as init process Nov 5 15:51:31.241742 kernel: with arguments: Nov 5 15:51:31.241753 kernel: /init Nov 5 15:51:31.241762 kernel: with environment: Nov 5 15:51:31.241772 kernel: HOME=/ Nov 5 15:51:31.241781 kernel: TERM=linux Nov 5 15:51:31.241791 kernel: hv_vmbus: Vmbus version:5.3 Nov 5 15:51:31.241800 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 5 15:51:31.241810 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 5 15:51:31.241819 kernel: PTP clock support registered Nov 5 15:51:31.241830 kernel: hv_utils: Registering HyperV Utility Driver Nov 5 15:51:31.241840 kernel: hv_vmbus: registering driver hv_utils Nov 5 15:51:31.241849 kernel: hv_utils: Shutdown IC version 3.2 Nov 5 15:51:31.241859 kernel: hv_utils: Heartbeat IC version 3.0 Nov 5 15:51:31.241869 kernel: hv_utils: TimeSync IC version 4.0 Nov 5 15:51:31.241878 kernel: SCSI subsystem initialized Nov 5 15:51:31.241888 kernel: hv_vmbus: registering driver hv_pci Nov 5 15:51:31.242033 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 5 15:51:31.242148 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 5 15:51:31.242287 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 5 15:51:31.242404 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 5 15:51:31.242552 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 5 15:51:31.242683 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 5 15:51:31.242799 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 5 15:51:31.242924 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 5 15:51:31.242936 kernel: hv_vmbus: registering driver hv_storvsc Nov 5 15:51:31.243066 kernel: scsi host0: storvsc_host_t Nov 5 15:51:31.243205 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 5 15:51:31.243217 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 5 15:51:31.243227 kernel: hv_vmbus: registering driver hid_hyperv Nov 5 15:51:31.243237 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Nov 5 15:51:31.243375 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 5 15:51:31.243389 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 5 15:51:31.243401 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Nov 5 15:51:31.243511 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 5 15:51:31.243638 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 5 15:51:31.243731 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 5 15:51:31.243743 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 5 15:51:31.243867 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 5 15:51:31.243881 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 5 
15:51:31.244000 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 5 15:51:31.244012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:51:31.244021 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:51:31.244031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:51:31.244041 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:51:31.244053 kernel: raid6: avx512x4 gen() 42084 MB/s Nov 5 15:51:31.244076 kernel: raid6: avx512x2 gen() 41781 MB/s Nov 5 15:51:31.244088 kernel: raid6: avx512x1 gen() 25000 MB/s Nov 5 15:51:31.244098 kernel: raid6: avx2x4 gen() 35349 MB/s Nov 5 15:51:31.244107 kernel: raid6: avx2x2 gen() 37696 MB/s Nov 5 15:51:31.244117 kernel: raid6: avx2x1 gen() 30977 MB/s Nov 5 15:51:31.244127 kernel: raid6: using algorithm avx512x4 gen() 42084 MB/s Nov 5 15:51:31.244137 kernel: raid6: .... xor() 7294 MB/s, rmw enabled Nov 5 15:51:31.244148 kernel: raid6: using avx512x2 recovery algorithm Nov 5 15:51:31.244158 kernel: xor: automatically using best checksumming function avx Nov 5 15:51:31.244168 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:51:31.244178 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (899) Nov 5 15:51:31.244188 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:51:31.244198 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:51:31.244208 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 15:51:31.244219 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:51:31.244229 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:51:31.244239 kernel: loop: module loaded Nov 5 15:51:31.244249 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:51:31.244328 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:51:31.244341 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:51:31.244357 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:51:31.244368 systemd[1]: Detected virtualization microsoft. Nov 5 15:51:31.244378 systemd[1]: Detected architecture x86-64. Nov 5 15:51:31.244389 systemd[1]: Running in initrd. Nov 5 15:51:31.244399 systemd[1]: No hostname configured, using default hostname. Nov 5 15:51:31.244409 systemd[1]: Hostname set to . Nov 5 15:51:31.244420 systemd[1]: Initializing machine ID from random generator. Nov 5 15:51:31.244431 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:51:31.244441 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:51:31.244451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:51:31.244462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:51:31.244473 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 5 15:51:31.244484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:51:31.244497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:51:31.244507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:51:31.244517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:51:31.244528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:51:31.244537 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:51:31.244547 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:51:31.244556 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:51:31.244566 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:51:31.244576 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:51:31.244587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:51:31.244599 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:51:31.244610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:51:31.244620 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:51:31.244631 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:51:31.244642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:51:31.244652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:51:31.244662 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:51:31.244672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:51:31.244681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:51:31.244690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:51:31.244700 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:51:31.244734 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:51:31.244744 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:51:31.244755 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:51:31.244765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:51:31.244773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:51:31.244784 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:51:31.244796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:51:31.244807 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:51:31.244817 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:51:31.244827 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:51:31.244838 kernel: Bridge firewalling registered Nov 5 15:51:31.244868 systemd-journald[1034]: Collecting audit messages is disabled. Nov 5 15:51:31.244896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 5 15:51:31.244907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:51:31.244919 systemd-journald[1034]: Journal started Nov 5 15:51:31.244945 systemd-journald[1034]: Runtime Journal (/run/log/journal/5e2c5b42c4e54f42a731827f0ef326cd) is 8M, max 158.6M, 150.6M free. Nov 5 15:51:31.230527 systemd-modules-load[1036]: Inserted module 'br_netfilter' Nov 5 15:51:31.249401 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:51:31.251643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:51:31.258422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:51:31.265022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:51:31.267399 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:51:31.268136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:51:31.288757 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:51:31.292486 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:51:31.297085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:51:31.306187 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:51:31.310624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:51:31.317556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:51:31.324352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:51:31.341581 dracut-cmdline[1070]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:51:31.487598 systemd-resolved[1074]: Positive Trust Anchors: Nov 5 15:51:31.489653 systemd-resolved[1074]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:51:31.493142 systemd-resolved[1074]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:51:31.497111 systemd-resolved[1074]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:51:31.573155 systemd-resolved[1074]: Defaulting to hostname 'linux'. Nov 5 15:51:31.574993 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:51:31.581344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 5 15:51:31.629304 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:51:31.708305 kernel: iscsi: registered transport (tcp) Nov 5 15:51:31.760437 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:51:31.760578 kernel: QLogic iSCSI HBA Driver Nov 5 15:51:31.844987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:51:31.870275 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:51:31.874503 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:51:31.907896 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:51:31.912141 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:51:31.927410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:51:31.947364 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:51:31.954173 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:51:31.983541 systemd-udevd[1295]: Using default interface naming scheme 'v257'. Nov 5 15:51:31.997139 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:51:32.003619 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:51:32.030416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:51:32.043138 dracut-pre-trigger[1374]: rd.md=0: removing MD RAID activation Nov 5 15:51:32.054576 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:51:32.074801 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:51:32.081570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:51:32.106149 systemd-networkd[1432]: lo: Link UP Nov 5 15:51:32.108214 systemd-networkd[1432]: lo: Gained carrier Nov 5 15:51:32.108651 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:51:32.111849 systemd[1]: Reached target network.target - Network. Nov 5 15:51:32.135513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:51:32.145426 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:51:32.217962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:51:32.219498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:51:32.227063 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:51:32.233551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:51:32.251431 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:51:32.274306 kernel: hv_vmbus: registering driver hv_netvsc Nov 5 15:51:32.284118 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523492bb (unnamed net_device) (uninitialized): VF slot 1 added Nov 5 15:51:32.303713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 5 15:51:32.305961 kernel: AES CTR mode by8 optimization enabled Nov 5 15:51:32.305897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 5 15:51:32.317788 systemd-networkd[1432]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:51:32.317795 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:51:32.320156 systemd-networkd[1432]: eth0: Link UP Nov 5 15:51:32.320758 systemd-networkd[1432]: eth0: Gained carrier Nov 5 15:51:32.320772 systemd-networkd[1432]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:51:32.348388 systemd-networkd[1432]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 5 15:51:32.465304 kernel: nvme nvme0: using unchecked data buffer Nov 5 15:51:32.578675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 5 15:51:32.588513 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:51:32.682767 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 5 15:51:32.706564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 5 15:51:32.721070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 5 15:51:32.817530 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:51:32.818049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:51:32.822600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:51:32.824623 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:51:32.839533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:51:32.899782 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:51:33.305986 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 5 15:51:33.306249 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 5 15:51:33.309338 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 5 15:51:33.311103 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 5 15:51:33.316296 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 5 15:51:33.320324 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 5 15:51:33.325346 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 5 15:51:33.327391 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 5 15:51:33.346750 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 5 15:51:33.346965 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 5 15:51:33.352305 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 5 15:51:33.364402 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 5 15:51:33.374304 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 5 15:51:33.378426 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523492bb eth0: VF registering: eth1 Nov 5 15:51:33.378613 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 5 15:51:33.383174 systemd-networkd[1432]: eth1: Interface name change detected, renamed to enP30832s1. 
Nov 5 15:51:33.385426 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 5 15:51:33.483317 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 5 15:51:33.488378 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 5 15:51:33.488660 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523492bb eth0: Data path switched to VF: enP30832s1 Nov 5 15:51:33.489535 systemd-networkd[1432]: enP30832s1: Link UP Nov 5 15:51:33.490793 systemd-networkd[1432]: enP30832s1: Gained carrier Nov 5 15:51:33.899881 disk-uuid[1598]: Warning: The kernel is still using the old partition table. Nov 5 15:51:33.899881 disk-uuid[1598]: The new table will be used at the next reboot or after you Nov 5 15:51:33.899881 disk-uuid[1598]: run partprobe(8) or kpartx(8) Nov 5 15:51:33.899881 disk-uuid[1598]: The operation has completed successfully. Nov 5 15:51:33.909222 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:51:33.909359 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:51:33.916113 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:51:33.963441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1644) Nov 5 15:51:33.963482 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:51:33.966640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:51:34.015466 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:51:34.015526 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 5 15:51:34.016654 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:51:34.023301 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:51:34.023603 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:51:34.028426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:51:34.104427 systemd-networkd[1432]: eth0: Gained IPv6LL Nov 5 15:51:34.992063 ignition[1663]: Ignition 2.22.0 Nov 5 15:51:34.992076 ignition[1663]: Stage: fetch-offline Nov 5 15:51:34.994769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:51:34.992197 ignition[1663]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:34.999531 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 15:51:34.992205 ignition[1663]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:34.992321 ignition[1663]: parsed url from cmdline: "" Nov 5 15:51:34.992324 ignition[1663]: no config URL provided Nov 5 15:51:34.992329 ignition[1663]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:51:34.992336 ignition[1663]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:51:34.992341 ignition[1663]: failed to fetch config: resource requires networking Nov 5 15:51:34.992498 ignition[1663]: Ignition finished successfully Nov 5 15:51:35.031798 ignition[1670]: Ignition 2.22.0 Nov 5 15:51:35.031809 ignition[1670]: Stage: fetch Nov 5 15:51:35.033455 ignition[1670]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:35.033471 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:35.033570 ignition[1670]: parsed url from cmdline: "" Nov 5 15:51:35.033574 ignition[1670]: no config URL provided Nov 5 15:51:35.033579 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:51:35.033584 ignition[1670]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:51:35.033607 ignition[1670]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 5 15:51:35.090084 ignition[1670]: GET result: OK Nov 5 15:51:35.090196 ignition[1670]: config has been read from IMDS userdata Nov 5 15:51:35.090228 ignition[1670]: parsing config with SHA512: 00b68916530f33aceba43bf88bb7d08ec9cd7116fe8ba49cc5a9fe2eee1ff8b6152463d7991a7ecd36ab3703f65b0f40a0b16cfa20e1b991a5024d0f1b7c4695 Nov 5 15:51:35.096845 unknown[1670]: fetched base config from "system" Nov 5 15:51:35.096856 unknown[1670]: fetched base config from "system" Nov 5 15:51:35.097267 ignition[1670]: fetch: fetch complete Nov 5 15:51:35.096861 unknown[1670]: fetched user config from "azure" Nov 5 15:51:35.097272 ignition[1670]: fetch: fetch passed Nov 5 15:51:35.101434 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:51:35.097338 ignition[1670]: Ignition finished successfully Nov 5 15:51:35.107195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:51:35.133263 ignition[1677]: Ignition 2.22.0 Nov 5 15:51:35.133274 ignition[1677]: Stage: kargs Nov 5 15:51:35.136310 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:51:35.133498 ignition[1677]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:35.140412 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:51:35.133506 ignition[1677]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:35.134582 ignition[1677]: kargs: kargs passed Nov 5 15:51:35.134625 ignition[1677]: Ignition finished successfully Nov 5 15:51:35.163568 ignition[1683]: Ignition 2.22.0 Nov 5 15:51:35.163579 ignition[1683]: Stage: disks Nov 5 15:51:35.164792 ignition[1683]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:35.164801 ignition[1683]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:35.167931 ignition[1683]: disks: disks passed Nov 5 15:51:35.170138 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:51:35.167964 ignition[1683]: Ignition finished successfully Nov 5 15:51:35.176005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:51:35.180339 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:51:35.182555 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 5 15:51:35.182583 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:51:35.182800 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:51:35.183681 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:51:35.327015 systemd-fsck[1692]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Nov 5 15:51:35.331199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:51:35.338493 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:51:37.241646 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:51:37.242246 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:51:37.245206 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:51:37.291366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:51:37.310383 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:51:37.316384 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 5 15:51:37.322303 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1701) Nov 5 15:51:37.325539 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:51:37.336020 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:51:37.336047 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:51:37.325706 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:51:37.341823 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:51:37.341848 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 5 15:51:37.341861 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:51:37.331503 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:51:37.339651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:51:37.346680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:51:37.949294 coreos-metadata[1703]: Nov 05 15:51:37.949 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 5 15:51:37.956374 coreos-metadata[1703]: Nov 05 15:51:37.952 INFO Fetch successful Nov 5 15:51:37.956374 coreos-metadata[1703]: Nov 05 15:51:37.952 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 5 15:51:37.963333 coreos-metadata[1703]: Nov 05 15:51:37.962 INFO Fetch successful Nov 5 15:51:37.977414 coreos-metadata[1703]: Nov 05 15:51:37.977 INFO wrote hostname ci-4487.0.1-a-e6d953e7e7 to /sysroot/etc/hostname Nov 5 15:51:37.979365 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:51:38.272897 initrd-setup-root[1732]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:51:38.311372 initrd-setup-root[1739]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:51:38.330735 initrd-setup-root[1746]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:51:38.350360 initrd-setup-root[1753]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:51:39.647769 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Nov 5 15:51:39.652301 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:51:39.658542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:51:39.686126 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:51:39.692292 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:51:39.705376 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:51:39.722532 ignition[1821]: INFO : Ignition 2.22.0 Nov 5 15:51:39.722532 ignition[1821]: INFO : Stage: mount Nov 5 15:51:39.730380 ignition[1821]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:39.730380 ignition[1821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:39.730380 ignition[1821]: INFO : mount: mount passed Nov 5 15:51:39.730380 ignition[1821]: INFO : Ignition finished successfully Nov 5 15:51:39.725010 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:51:39.728843 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:51:39.750816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:51:39.772295 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1834) Nov 5 15:51:39.775827 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:51:39.775860 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:51:39.782576 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:51:39.782609 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 5 15:51:39.784041 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:51:39.786047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:51:39.819189 ignition[1850]: INFO : Ignition 2.22.0 Nov 5 15:51:39.819189 ignition[1850]: INFO : Stage: files Nov 5 15:51:39.824483 ignition[1850]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:39.824483 ignition[1850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:39.824483 ignition[1850]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:51:39.835201 ignition[1850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:51:39.835201 ignition[1850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:51:39.999819 ignition[1850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:51:40.003419 ignition[1850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:51:40.003419 ignition[1850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:51:40.002604 unknown[1850]: wrote ssh authorized keys file for user: core Nov 5 15:51:40.061266 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:51:40.063972 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 15:51:40.106936 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:51:40.177009 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:51:40.180385 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:51:40.180385 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:51:40.180385 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:51:40.180385 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:51:40.180385 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:51:40.195431 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:51:40.195431 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:51:40.195431 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:51:40.206384 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:51:40.209502 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:51:40.209502 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:51:40.216053 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:51:40.222401 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:51:40.222401 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 5 15:51:40.570717 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:51:41.726150 ignition[1850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:51:41.726150 ignition[1850]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:51:41.754954 ignition[1850]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:51:41.766491 ignition[1850]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:51:41.766491 ignition[1850]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:51:41.773364 ignition[1850]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:51:41.773364 ignition[1850]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:51:41.773364 ignition[1850]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:51:41.773364 ignition[1850]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:51:41.773364 ignition[1850]: INFO : files: files passed Nov 5 15:51:41.773364 ignition[1850]: INFO : Ignition finished successfully Nov 5 15:51:41.773153 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:51:41.778173 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:51:41.792580 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:51:41.796621 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:51:41.796724 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:51:41.818158 initrd-setup-root-after-ignition[1883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:51:41.818158 initrd-setup-root-after-ignition[1883]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:51:41.824327 initrd-setup-root-after-ignition[1887]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:51:41.827885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:51:41.831663 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:51:41.835410 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:51:41.874807 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:51:41.874905 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:51:41.878960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 5 15:51:41.883426 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:51:41.886950 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:51:41.887728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:51:41.907105 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:51:41.911446 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:51:41.935582 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:51:41.935853 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:51:41.939709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:51:41.947642 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:51:41.950428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:51:41.950553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:51:41.952324 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:51:41.952595 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:51:41.953232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:51:41.967441 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:51:41.972448 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:51:41.975541 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:51:41.980439 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:51:41.980679 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:51:41.981033 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:51:41.981428 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:51:41.981753 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:51:41.982071 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:51:41.982208 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:51:42.001570 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:51:42.003480 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:51:42.010273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:51:42.011566 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:51:42.016650 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:51:42.016770 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:51:42.042934 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:51:42.043072 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:51:42.048474 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:51:42.048596 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:51:42.051749 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 15:51:42.051878 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 5 15:51:42.069375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:51:42.073480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:51:42.081907 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:51:42.084187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:51:42.090399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:51:42.090512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:51:42.095752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:51:42.095886 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:51:42.112313 ignition[1907]: INFO : Ignition 2.22.0 Nov 5 15:51:42.112313 ignition[1907]: INFO : Stage: umount Nov 5 15:51:42.112313 ignition[1907]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:51:42.112313 ignition[1907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:51:42.112313 ignition[1907]: INFO : umount: umount passed Nov 5 15:51:42.112313 ignition[1907]: INFO : Ignition finished successfully Nov 5 15:51:42.108937 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:51:42.109037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:51:42.115595 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:51:42.115686 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:51:42.123368 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:51:42.123449 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:51:42.127392 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:51:42.127441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:51:42.129752 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:51:42.129786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:51:42.129941 systemd[1]: Stopped target network.target - Network. Nov 5 15:51:42.129970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:51:42.130003 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:51:42.130217 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:51:42.158385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:51:42.162319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:51:42.168336 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:51:42.170047 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:51:42.175517 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:51:42.176898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:51:42.177118 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:51:42.177150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:51:42.177392 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:51:42.177441 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:51:42.177548 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:51:42.177581 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 5 15:51:42.187550 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:51:42.188907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:51:42.200835 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:51:42.200954 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:51:42.206328 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:51:42.206426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:51:42.212181 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:51:42.214008 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:51:42.214043 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:51:42.216038 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:51:42.216606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:51:42.216655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:51:42.216722 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:51:42.216751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:51:42.217273 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:51:42.219365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:51:42.219743 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:51:42.238146 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:51:42.239697 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:51:42.245897 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:51:42.245966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:51:42.247526 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:51:42.247555 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:51:42.247710 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:51:42.247750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:51:42.274580 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:51:42.276025 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:51:42.280415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:51:42.280469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:51:42.289113 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:51:42.292375 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:51:42.296456 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:51:42.299371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:51:42.299417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:51:42.302377 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 15:51:42.302422 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 5 15:51:42.308774 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:51:42.308828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:51:42.317786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:51:42.317836 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:51:42.326017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:51:42.326105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:51:42.357092 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:51:42.364267 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:51:42.366342 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:51:42.368264 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:51:42.384376 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523492bb eth0: Data path switched from VF: enP30832s1 Nov 5 15:51:42.384584 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 5 15:51:42.368342 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:51:42.384242 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:51:42.384353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:51:42.391509 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:51:42.396413 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:51:42.426976 systemd[1]: Switching root. Nov 5 15:51:42.531578 systemd-journald[1034]: Journal stopped Nov 5 15:51:50.282433 systemd-journald[1034]: Received SIGTERM from PID 1 (systemd). Nov 5 15:51:50.282473 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:51:50.282492 kernel: SELinux: policy capability open_perms=1 Nov 5 15:51:50.282503 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:51:50.282512 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:51:50.282522 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:51:50.282533 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:51:50.282545 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:51:50.282555 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:51:50.282564 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:51:50.282575 kernel: audit: type=1403 audit(1762357904.015:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:51:50.282586 systemd[1]: Successfully loaded SELinux policy in 189.493ms. Nov 5 15:51:50.282598 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.500ms. Nov 5 15:51:50.282612 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:51:50.282624 systemd[1]: Detected virtualization microsoft. Nov 5 15:51:50.282635 systemd[1]: Detected architecture x86-64. Nov 5 15:51:50.282646 systemd[1]: Detected first boot. Nov 5 15:51:50.282659 systemd[1]: Hostname set to <ci-4487.0.1-a-e6d953e7e7>. Nov 5 15:51:50.282671 systemd[1]: Initializing machine ID from random generator. Nov 5 15:51:50.282682 zram_generator::config[1950]: No configuration found.
Nov 5 15:51:50.282693 kernel: Guest personality initialized and is inactive Nov 5 15:51:50.282703 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 5 15:51:50.282713 kernel: Initialized host personality Nov 5 15:51:50.282725 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:51:50.282736 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:51:50.282747 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:51:50.282757 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:51:50.282769 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:51:50.282780 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:51:50.283597 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:51:50.283618 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:51:50.283629 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:51:50.283641 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:51:50.283652 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:51:50.283664 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:51:50.283677 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:51:50.283689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:51:50.283700 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:51:50.283712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:51:50.283723 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 15:51:50.283737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:51:50.283751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:51:50.283762 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:51:50.283773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:51:50.283785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:51:50.283796 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:51:50.283808 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:51:50.283822 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:51:50.283833 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:51:50.283844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:51:50.283855 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:51:50.283867 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:51:50.283878 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:51:50.283889 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:51:50.283904 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:51:50.283916 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Nov 5 15:51:50.283928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:51:50.283939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:51:50.284364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:51:50.284378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:51:50.284390 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:51:50.284401 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:51:50.284413 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:51:50.284426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:51:50.284444 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:51:50.284457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:51:50.284469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:51:50.284481 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:51:50.284493 systemd[1]: Reached target machines.target - Containers. Nov 5 15:51:50.284505 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:51:50.284516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:51:50.284529 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:51:50.284541 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:51:50.284552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:51:50.284638 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:51:50.284727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:51:50.284740 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:51:50.284819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:51:50.284833 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:51:50.284845 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:51:50.284918 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:51:50.284931 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:51:50.285003 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:51:50.285016 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:51:50.285090 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:51:50.285105 kernel: fuse: init (API version 7.41) Nov 5 15:51:50.285185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:51:50.285202 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 5 15:51:50.285216 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:51:50.285230 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:51:50.285244 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:51:50.285263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:51:50.285303 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:51:50.285319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:51:50.285366 systemd-journald[2033]: Collecting audit messages is disabled. Nov 5 15:51:50.285409 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:51:50.285423 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:51:50.285437 systemd-journald[2033]: Journal started Nov 5 15:51:50.285467 systemd-journald[2033]: Runtime Journal (/run/log/journal/b07bcec398ce40009f6cdad7be13fa5b) is 8M, max 158.6M, 150.6M free. Nov 5 15:51:49.796431 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:51:49.810848 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 5 15:51:49.811235 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:51:50.291295 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:51:50.293193 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:51:50.294836 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:51:50.297633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:51:50.301582 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:51:50.301730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:51:50.305539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:51:50.305688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:51:50.307610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:51:50.307769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:51:50.309982 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:51:50.310204 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:51:50.314052 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:51:50.314172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:51:50.317014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:51:50.321749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:51:50.326672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:51:50.333271 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:51:50.342295 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:51:50.359410 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:51:50.364046 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Nov 5 15:51:50.377373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:51:50.381389 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:51:50.381427 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:51:50.387299 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:51:50.390271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:51:50.424415 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:51:50.432442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:51:50.436401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:51:50.440397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:51:50.443008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:51:50.447723 kernel: ACPI: bus type drm_connector registered Nov 5 15:51:50.449382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:51:50.452719 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:51:50.457401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:51:50.461750 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:51:50.461910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:51:50.467821 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:51:50.470725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:51:50.473575 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:51:50.477480 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:51:50.496698 systemd-journald[2033]: Time spent on flushing to /var/log/journal/b07bcec398ce40009f6cdad7be13fa5b is 11.635ms for 971 entries. Nov 5 15:51:50.496698 systemd-journald[2033]: System Journal (/var/log/journal/b07bcec398ce40009f6cdad7be13fa5b) is 8M, max 2.2G, 2.2G free. Nov 5 15:51:50.576696 systemd-journald[2033]: Received client request to flush runtime journal. Nov 5 15:51:50.576735 kernel: loop1: detected capacity change from 0 to 128048 Nov 5 15:51:50.520849 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:51:50.523122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:51:50.529423 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:51:50.577768 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:51:50.588698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:51:50.647302 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:51:50.667083 systemd-tmpfiles[2090]: ACLs are not supported, ignoring. Nov 5 15:51:50.667104 systemd-tmpfiles[2090]: ACLs are not supported, ignoring. 
Nov 5 15:51:50.671349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:51:50.677426 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:51:50.812367 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:51:51.050302 kernel: loop2: detected capacity change from 0 to 229808 Nov 5 15:51:51.099302 kernel: loop3: detected capacity change from 0 to 27752 Nov 5 15:51:51.375490 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:51:51.382433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:51:51.387407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:51:51.419030 systemd-tmpfiles[2112]: ACLs are not supported, ignoring. Nov 5 15:51:51.419050 systemd-tmpfiles[2112]: ACLs are not supported, ignoring. Nov 5 15:51:51.421537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:51:51.481388 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:51:51.485789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:51:51.515572 systemd-udevd[2116]: Using default interface naming scheme 'v257'. Nov 5 15:51:51.567510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:51:51.615343 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:51:51.678304 kernel: loop4: detected capacity change from 0 to 110984 Nov 5 15:51:51.733061 systemd-resolved[2111]: Positive Trust Anchors: Nov 5 15:51:51.733076 systemd-resolved[2111]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:51:51.733080 systemd-resolved[2111]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:51:51.733113 systemd-resolved[2111]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:51:51.860584 systemd-resolved[2111]: Using system hostname 'ci-4487.0.1-a-e6d953e7e7'. Nov 5 15:51:51.861705 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:51:51.865410 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:51:52.239318 kernel: loop5: detected capacity change from 0 to 128048 Nov 5 15:51:52.253313 kernel: loop6: detected capacity change from 0 to 229808 Nov 5 15:51:52.268297 kernel: loop7: detected capacity change from 0 to 27752 Nov 5 15:51:52.280353 kernel: loop1: detected capacity change from 0 to 110984 Nov 5 15:51:52.291400 (sd-merge)[2126]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Nov 5 15:51:52.294238 (sd-merge)[2126]: Merged extensions into '/usr'. Nov 5 15:51:52.297776 systemd[1]: Reload requested from client PID 2089 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:51:52.297790 systemd[1]: Reloading... 
Nov 5 15:51:52.350326 zram_generator::config[2153]: No configuration found. Nov 5 15:51:52.536481 kernel: hv_vmbus: registering driver hyperv_fb Nov 5 15:51:52.549300 kernel: hv_vmbus: registering driver hv_balloon Nov 5 15:51:52.559315 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:51:52.563804 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 5 15:51:52.563864 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 5 15:51:52.566825 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 5 15:51:52.570152 kernel: Console: switching to colour dummy device 80x25 Nov 5 15:51:52.574597 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:51:52.659745 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:51:52.660167 systemd[1]: Reloading finished in 362 ms. Nov 5 15:51:52.665302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 5 15:51:52.675805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:51:52.688302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:51:52.709625 systemd[1]: Starting ensure-sysext.service... Nov 5 15:51:52.716050 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:51:52.723620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:51:52.739191 systemd[1]: Reload requested from client PID 2264 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:51:52.739200 systemd[1]: Reloading... Nov 5 15:51:52.796756 systemd-tmpfiles[2266]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:51:52.797555 systemd-tmpfiles[2266]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:51:52.798696 systemd-tmpfiles[2266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:51:52.799614 systemd-tmpfiles[2266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:51:52.801474 systemd-tmpfiles[2266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:51:52.802600 systemd-tmpfiles[2266]: ACLs are not supported, ignoring. Nov 5 15:51:52.802690 systemd-tmpfiles[2266]: ACLs are not supported, ignoring. Nov 5 15:51:52.841302 zram_generator::config[2295]: No configuration found. Nov 5 15:51:52.902986 systemd-tmpfiles[2266]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:51:52.903103 systemd-tmpfiles[2266]: Skipping /boot Nov 5 15:51:52.918137 systemd-tmpfiles[2266]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:51:52.919409 systemd-tmpfiles[2266]: Skipping /boot Nov 5 15:51:53.053298 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 5 15:51:53.144465 systemd-networkd[2265]: lo: Link UP Nov 5 15:51:53.144478 systemd-networkd[2265]: lo: Gained carrier Nov 5 15:51:53.146022 systemd-networkd[2265]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:51:53.146034 systemd-networkd[2265]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 15:51:53.148360 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 5 15:51:53.150305 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 5 15:51:53.152295 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523492bb eth0: Data path switched to VF: enP30832s1 Nov 5 15:51:53.152163 systemd-networkd[2265]: enP30832s1: Link UP Nov 5 15:51:53.152271 systemd-networkd[2265]: eth0: Link UP Nov 5 15:51:53.152287 systemd-networkd[2265]: eth0: Gained carrier Nov 5 15:51:53.152657 systemd-networkd[2265]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:51:53.157551 systemd-networkd[2265]: enP30832s1: Gained carrier Nov 5 15:51:53.163317 systemd-networkd[2265]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 5 15:51:53.176928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 5 15:51:53.179197 systemd[1]: Reloading finished in 439 ms. Nov 5 15:51:53.190523 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:51:53.202375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:51:53.240266 systemd[1]: Finished ensure-sysext.service. Nov 5 15:51:53.245030 systemd[1]: Reached target network.target - Network. Nov 5 15:51:53.246571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:51:53.247455 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:51:53.252366 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:51:53.255335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:51:53.257854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:51:53.261275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:51:53.264722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:51:53.275401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:51:53.279106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:51:53.283508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:51:53.284735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:51:53.286502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:51:53.292674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:51:53.298096 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:51:53.303502 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:51:53.306424 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:51:53.310724 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 5 15:51:53.316540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:51:53.320373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:51:53.321384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:51:53.321548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:51:53.325733 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:51:53.326003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:51:53.328422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:51:53.329011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:51:53.335364 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:51:53.335529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:51:53.342731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:51:53.342801 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:51:53.352872 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:51:53.387532 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:51:53.401747 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:51:53.694839 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:51:53.737370 augenrules[2419]: No rules Nov 5 15:51:53.738386 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:51:53.738705 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:51:54.264422 systemd-networkd[2265]: eth0: Gained IPv6LL Nov 5 15:51:54.266612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:51:54.266962 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:51:54.659143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:51:56.174032 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:51:56.178529 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:52:02.210233 ldconfig[2377]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:52:02.221471 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:52:02.225678 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:52:02.254941 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:52:02.256769 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:52:02.258385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:52:02.262373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Nov 5 15:52:02.265337 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:52:02.266825 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:52:02.269394 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:52:02.271048 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:52:02.272661 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:52:02.272701 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:52:02.274065 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:52:02.291342 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:52:02.293953 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:52:02.298970 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:52:02.301381 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:52:02.303415 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:52:02.312786 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:52:02.314665 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:52:02.317250 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:52:02.319576 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:52:02.325367 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:52:02.328382 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:52:02.328409 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:52:02.364441 systemd[1]: Starting chronyd.service - NTP client/server... Nov 5 15:52:02.367318 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:52:02.376441 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:52:02.381518 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:52:02.388414 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:52:02.393694 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:52:02.401139 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:52:02.403228 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:52:02.406692 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:52:02.409326 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 5 15:52:02.412397 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 5 15:52:02.414926 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 5 15:52:02.418441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:52:02.422252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:52:02.429970 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:52:02.435242 jq[2443]: false Nov 5 15:52:02.435619 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:52:02.441531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:52:02.452426 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:52:02.458662 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:52:02.461166 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:52:02.463531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:52:02.463988 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:52:02.468369 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:52:02.476693 KVP[2446]: KVP starting; pid is:2446 Nov 5 15:52:02.479633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:52:02.486461 kernel: hv_utils: KVP IC version 4.0 Nov 5 15:52:02.482324 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:52:02.485949 KVP[2446]: KVP LIC Version: 3.1 Nov 5 15:52:02.482524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:52:02.486963 chronyd[2435]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 5 15:52:02.496935 extend-filesystems[2444]: Found /dev/nvme0n1p6 Nov 5 15:52:02.503662 oslogin_cache_refresh[2445]: Refreshing passwd entry cache Nov 5 15:52:02.504677 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Refreshing passwd entry cache Nov 5 15:52:02.506653 jq[2459]: true Nov 5 15:52:02.505755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:52:02.506484 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:52:02.526450 jq[2484]: true Nov 5 15:52:02.526201 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:52:02.527557 extend-filesystems[2444]: Found /dev/nvme0n1p9 Nov 5 15:52:02.530709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:52:02.533764 extend-filesystems[2444]: Checking size of /dev/nvme0n1p9 Nov 5 15:52:02.535969 chronyd[2435]: Timezone right/UTC failed leap second check, ignoring Nov 5 15:52:02.536116 chronyd[2435]: Loaded seccomp filter (level 2) Nov 5 15:52:02.538575 systemd[1]: Started chronyd.service - NTP client/server. Nov 5 15:52:02.540666 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Failure getting users, quitting Nov 5 15:52:02.540664 oslogin_cache_refresh[2445]: Failure getting users, quitting Nov 5 15:52:02.540872 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:52:02.540872 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Refreshing group entry cache Nov 5 15:52:02.540681 oslogin_cache_refresh[2445]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 5 15:52:02.540727 oslogin_cache_refresh[2445]: Refreshing group entry cache Nov 5 15:52:02.550044 (ntainerd)[2489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:52:02.558508 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Failure getting groups, quitting Nov 5 15:52:02.558508 google_oslogin_nss_cache[2445]: oslogin_cache_refresh[2445]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:52:02.558505 oslogin_cache_refresh[2445]: Failure getting groups, quitting Nov 5 15:52:02.558515 oslogin_cache_refresh[2445]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:52:02.560920 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:52:02.561199 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:52:02.570883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:52:02.590269 update_engine[2458]: I20251105 15:52:02.590204 2458 main.cc:92] Flatcar Update Engine starting Nov 5 15:52:02.599014 extend-filesystems[2444]: Resized partition /dev/nvme0n1p9 Nov 5 15:52:02.616527 tar[2469]: linux-amd64/LICENSE Nov 5 15:52:02.620416 tar[2469]: linux-amd64/helm Nov 5 15:52:02.637962 extend-filesystems[2521]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:52:02.650305 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Nov 5 15:52:02.701170 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Nov 5 15:52:02.654897 systemd-logind[2457]: New seat seat0. Nov 5 15:52:02.704091 systemd-logind[2457]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 5 15:52:02.704394 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:52:02.744174 extend-filesystems[2521]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 15:52:02.744174 extend-filesystems[2521]: old_desc_blocks = 4, new_desc_blocks = 4 Nov 5 15:52:02.744174 extend-filesystems[2521]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Nov 5 15:52:02.722248 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:52:02.769951 extend-filesystems[2444]: Resized filesystem in /dev/nvme0n1p9 Nov 5 15:52:02.722487 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:52:02.779336 bash[2518]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:52:02.782781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:52:02.787711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 15:52:02.870458 dbus-daemon[2438]: [system] SELinux support is enabled Nov 5 15:52:02.870848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:52:02.878791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:52:02.878826 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 5 15:52:02.881773 update_engine[2458]: I20251105 15:52:02.881726 2458 update_check_scheduler.cc:74] Next update check in 5m38s Nov 5 15:52:02.883434 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:52:02.883570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:52:02.889809 dbus-daemon[2438]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 15:52:02.889999 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:52:02.903586 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:52:02.944495 sshd_keygen[2493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:52:02.972067 coreos-metadata[2437]: Nov 05 15:52:02.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 5 15:52:02.982768 coreos-metadata[2437]: Nov 05 15:52:02.981 INFO Fetch successful Nov 5 15:52:02.982768 coreos-metadata[2437]: Nov 05 15:52:02.981 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 5 15:52:02.985973 coreos-metadata[2437]: Nov 05 15:52:02.985 INFO Fetch successful Nov 5 15:52:02.986141 coreos-metadata[2437]: Nov 05 15:52:02.986 INFO Fetching http://168.63.129.16/machine/635eca0b-f52c-4736-9044-b9fbf4fe36a0/dbceeb46%2D22d9%2D427d%2D819d%2D241446893c61.%5Fci%2D4487.0.1%2Da%2De6d953e7e7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 5 15:52:02.988553 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:52:02.988859 coreos-metadata[2437]: Nov 05 15:52:02.988 INFO Fetch successful Nov 5 15:52:02.989008 coreos-metadata[2437]: Nov 05 15:52:02.988 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 5 15:52:02.994308 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:52:03.024184 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 5 15:52:03.026316 coreos-metadata[2437]: Nov 05 15:52:03.025 INFO Fetch successful Nov 5 15:52:03.067123 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:52:03.072256 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:52:03.082443 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:52:03.082643 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:52:03.088824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:52:03.115615 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 5 15:52:03.145428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:52:03.150547 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:52:03.156460 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:52:03.164927 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:52:03.191143 locksmithd[2548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:52:03.324011 tar[2469]: linux-amd64/README.md Nov 5 15:52:03.344479 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
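The coreos-metadata fetches above go to the Azure wireserver (168.63.129.16) and the instance metadata service (169.254.169.254). The IMDS query can be reproduced from inside the VM; IMDS requires the "Metadata: true" request header, and the URL below is the one shown in the log.

# Reproduce the IMDS query from the log above (run inside the VM); a sketch.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"
# The whole compute document from the same API version:
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute?api-version=2017-08-01"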
Nov 5 15:52:03.704741 containerd[2489]: time="2025-11-05T15:52:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:52:03.706187 containerd[2489]: time="2025-11-05T15:52:03.705774794Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:52:03.716571 containerd[2489]: time="2025-11-05T15:52:03.716533547Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.899µs" Nov 5 15:52:03.716685 containerd[2489]: time="2025-11-05T15:52:03.716671467Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:52:03.716744 containerd[2489]: time="2025-11-05T15:52:03.716734879Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:52:03.716908 containerd[2489]: time="2025-11-05T15:52:03.716897739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:52:03.716954 containerd[2489]: time="2025-11-05T15:52:03.716946113Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:52:03.717005 containerd[2489]: time="2025-11-05T15:52:03.716997281Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717112 containerd[2489]: time="2025-11-05T15:52:03.717080679Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717112 containerd[2489]: time="2025-11-05T15:52:03.717097716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717308 containerd[2489]: time="2025-11-05T15:52:03.717265512Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717308 containerd[2489]: time="2025-11-05T15:52:03.717300124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717375 containerd[2489]: time="2025-11-05T15:52:03.717312861Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717375 containerd[2489]: time="2025-11-05T15:52:03.717320334Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:52:03.717425 containerd[2489]: time="2025-11-05T15:52:03.717380567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.717569595Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.717600121Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.717610250Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.717660178Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.717962291Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:52:03.718312 containerd[2489]: time="2025-11-05T15:52:03.718012804Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:52:03.731313 containerd[2489]: time="2025-11-05T15:52:03.731248232Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:52:03.731427 containerd[2489]: time="2025-11-05T15:52:03.731412631Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:52:03.731532 containerd[2489]: time="2025-11-05T15:52:03.731520543Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:52:03.731578 containerd[2489]: time="2025-11-05T15:52:03.731570173Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:52:03.731620 containerd[2489]: time="2025-11-05T15:52:03.731612159Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:52:03.731669 containerd[2489]: time="2025-11-05T15:52:03.731658619Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:52:03.731714 containerd[2489]: time="2025-11-05T15:52:03.731707109Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:52:03.731754 containerd[2489]: time="2025-11-05T15:52:03.731746388Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:52:03.731798 containerd[2489]: time="2025-11-05T15:52:03.731791175Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:52:03.731828 containerd[2489]: time="2025-11-05T15:52:03.731823407Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:52:03.731856 containerd[2489]: time="2025-11-05T15:52:03.731851256Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:52:03.731889 containerd[2489]: time="2025-11-05T15:52:03.731883230Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:52:03.732007 containerd[2489]: time="2025-11-05T15:52:03.732000878Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:52:03.732038 containerd[2489]: time="2025-11-05T15:52:03.732032819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:52:03.732072 containerd[2489]: time="2025-11-05T15:52:03.732067325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:52:03.732105 containerd[2489]: time="2025-11-05T15:52:03.732098117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 5 15:52:03.732151 containerd[2489]: time="2025-11-05T15:52:03.732143062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:52:03.732188 containerd[2489]: time="2025-11-05T15:52:03.732181703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:52:03.732232 containerd[2489]: time="2025-11-05T15:52:03.732224884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:52:03.732273 containerd[2489]: time="2025-11-05T15:52:03.732265947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:52:03.732340 containerd[2489]: time="2025-11-05T15:52:03.732332216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:52:03.732378 containerd[2489]: time="2025-11-05T15:52:03.732370108Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:52:03.732422 containerd[2489]: time="2025-11-05T15:52:03.732413214Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:52:03.732525 containerd[2489]: time="2025-11-05T15:52:03.732514821Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:52:03.732565 containerd[2489]: time="2025-11-05T15:52:03.732558645Z" level=info msg="Start snapshots syncer" Nov 5 15:52:03.732629 containerd[2489]: time="2025-11-05T15:52:03.732621223Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:52:03.732951 containerd[2489]: time="2025-11-05T15:52:03.732926379Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 
15:52:03.733337 containerd[2489]: time="2025-11-05T15:52:03.733152145Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:52:03.733337 containerd[2489]: time="2025-11-05T15:52:03.733216390Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:52:03.733495 containerd[2489]: time="2025-11-05T15:52:03.733484159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:52:03.733562 containerd[2489]: time="2025-11-05T15:52:03.733554198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:52:03.733601 containerd[2489]: time="2025-11-05T15:52:03.733594302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:52:03.733652 containerd[2489]: time="2025-11-05T15:52:03.733645303Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:52:03.733776 containerd[2489]: time="2025-11-05T15:52:03.733711248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:52:03.733776 containerd[2489]: time="2025-11-05T15:52:03.733732268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:52:03.733776 containerd[2489]: time="2025-11-05T15:52:03.733745277Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:52:03.733865 containerd[2489]: time="2025-11-05T15:52:03.733857518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:52:03.733924 containerd[2489]: time="2025-11-05T15:52:03.733900362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:52:03.733958 containerd[2489]: time="2025-11-05T15:52:03.733914564Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:52:03.734028 containerd[2489]: time="2025-11-05T15:52:03.734009490Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:52:03.734102 containerd[2489]: time="2025-11-05T15:52:03.734091603Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734139998Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734150582Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734156997Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734163937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734171117Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:52:03.734212 
containerd[2489]: time="2025-11-05T15:52:03.734182997Z" level=info msg="runtime interface created" Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734186682Z" level=info msg="created NRI interface" Nov 5 15:52:03.734212 containerd[2489]: time="2025-11-05T15:52:03.734192443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:52:03.734359 containerd[2489]: time="2025-11-05T15:52:03.734312451Z" level=info msg="Connect containerd service" Nov 5 15:52:03.734359 containerd[2489]: time="2025-11-05T15:52:03.734353138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:52:03.737133 containerd[2489]: time="2025-11-05T15:52:03.737095470Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:52:03.947359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:03.960595 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:52:04.362899 containerd[2489]: time="2025-11-05T15:52:04.362795533Z" level=info msg="Start subscribing containerd event" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.362852975Z" level=info msg="Start recovering state" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363455233Z" level=info msg="Start event monitor" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363470382Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363485162Z" level=info msg="Start streaming server" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363499024Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363507404Z" level=info msg="runtime interface starting up..." Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363518435Z" level=info msg="starting plugins..." Nov 5 15:52:04.363711 containerd[2489]: time="2025-11-05T15:52:04.363531771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:52:04.365325 containerd[2489]: time="2025-11-05T15:52:04.365030966Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:52:04.365471 containerd[2489]: time="2025-11-05T15:52:04.365453622Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:52:04.366006 containerd[2489]: time="2025-11-05T15:52:04.365994835Z" level=info msg="containerd successfully booted in 0.661670s" Nov 5 15:52:04.366641 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:52:04.369788 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:52:04.373879 systemd[1]: Startup finished in 4.480s (kernel) + 13.688s (initrd) + 20.546s (userspace) = 38.715s. 
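The containerd error about /etc/cni/net.d is expected at this point: no cluster network add-on has installed a CNI configuration yet. Purely for illustration, a minimal conflist of the kind that would satisfy the CRI plugin is sketched below; the name, subnet and file name are invented, and on a real node the network add-on (Calico, Cilium, flannel, ...) writes its own file.

# Illustrative only - a minimal CNI conflist for /etc/cni/net.d (values invented).
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF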
Nov 5 15:52:04.525294 kubelet[2605]: E1105 15:52:04.525209 2605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:52:04.527315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:52:04.527452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:52:04.527768 systemd[1]: kubelet.service: Consumed 980ms CPU time, 266.1M memory peak. Nov 5 15:52:05.425793 waagent[2573]: 2025-11-05T15:52:05.425706Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 5 15:52:05.427690 waagent[2573]: 2025-11-05T15:52:05.427635Z INFO Daemon Daemon OS: flatcar 4487.0.1 Nov 5 15:52:05.428947 waagent[2573]: 2025-11-05T15:52:05.428866Z INFO Daemon Daemon Python: 3.11.13 Nov 5 15:52:05.430215 waagent[2573]: 2025-11-05T15:52:05.430150Z INFO Daemon Daemon Run daemon Nov 5 15:52:05.431652 waagent[2573]: 2025-11-05T15:52:05.431500Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.1' Nov 5 15:52:05.433903 waagent[2573]: 2025-11-05T15:52:05.433863Z INFO Daemon Daemon Using waagent for provisioning Nov 5 15:52:05.435429 waagent[2573]: 2025-11-05T15:52:05.435392Z INFO Daemon Daemon Activate resource disk Nov 5 15:52:05.436646 waagent[2573]: 2025-11-05T15:52:05.436568Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 5 15:52:05.439768 waagent[2573]: 2025-11-05T15:52:05.439724Z INFO Daemon Daemon Found device: None Nov 5 15:52:05.441203 waagent[2573]: 2025-11-05T15:52:05.441170Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 5 15:52:05.443638 waagent[2573]: 2025-11-05T15:52:05.443546Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 5 15:52:05.445753 waagent[2573]: 2025-11-05T15:52:05.445715Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 15:52:05.445868 waagent[2573]: 2025-11-05T15:52:05.445842Z INFO Daemon Daemon Running default provisioning handler Nov 5 15:52:05.451717 waagent[2573]: 2025-11-05T15:52:05.451645Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 5 15:52:05.452352 waagent[2573]: 2025-11-05T15:52:05.452314Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 5 15:52:05.452448 waagent[2573]: 2025-11-05T15:52:05.452425Z INFO Daemon Daemon cloud-init is enabled: False Nov 5 15:52:05.452504 waagent[2573]: 2025-11-05T15:52:05.452487Z INFO Daemon Daemon Copying ovf-env.xml Nov 5 15:52:05.458701 login[2577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 5 15:52:05.474439 systemd-logind[2457]: New session 1 of user core. Nov 5 15:52:05.463660 login[2578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 5 15:52:05.475832 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:52:05.477479 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 5 15:52:05.483305 systemd-logind[2457]: New session 2 of user core. Nov 5 15:52:05.510931 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:52:05.513100 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:52:05.553693 (systemd)[2629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:52:05.555524 systemd-logind[2457]: New session c1 of user core. Nov 5 15:52:05.622935 waagent[2573]: 2025-11-05T15:52:05.622867Z INFO Daemon Daemon Successfully mounted dvd Nov 5 15:52:05.650678 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 5 15:52:05.651633 waagent[2573]: 2025-11-05T15:52:05.651586Z INFO Daemon Daemon Detect protocol endpoint Nov 5 15:52:05.653414 waagent[2573]: 2025-11-05T15:52:05.653363Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 15:52:05.653781 waagent[2573]: 2025-11-05T15:52:05.653579Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 5 15:52:05.654574 waagent[2573]: 2025-11-05T15:52:05.653794Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 5 15:52:05.654574 waagent[2573]: 2025-11-05T15:52:05.653962Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 5 15:52:05.654574 waagent[2573]: 2025-11-05T15:52:05.654166Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 5 15:52:05.690524 waagent[2573]: 2025-11-05T15:52:05.690454Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 5 15:52:05.695144 waagent[2573]: 2025-11-05T15:52:05.690747Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 5 15:52:05.695144 waagent[2573]: 2025-11-05T15:52:05.690903Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 5 15:52:05.801921 waagent[2573]: 2025-11-05T15:52:05.801837Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 5 15:52:05.806301 waagent[2573]: 2025-11-05T15:52:05.805249Z INFO Daemon Daemon Forcing an update of the goal state. Nov 5 15:52:05.810216 waagent[2573]: 2025-11-05T15:52:05.810169Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 15:52:05.830903 waagent[2573]: 2025-11-05T15:52:05.830869Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 5 15:52:05.833636 waagent[2573]: 2025-11-05T15:52:05.833601Z INFO Daemon Nov 5 15:52:05.835073 waagent[2573]: 2025-11-05T15:52:05.834978Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3f9a8c5c-db7c-47ba-a704-310d129502f2 eTag: 4452524753322198179 source: Fabric] Nov 5 15:52:05.839482 waagent[2573]: 2025-11-05T15:52:05.839448Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 5 15:52:05.842580 waagent[2573]: 2025-11-05T15:52:05.842550Z INFO Daemon Nov 5 15:52:05.844034 waagent[2573]: 2025-11-05T15:52:05.843945Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 5 15:52:05.851258 waagent[2573]: 2025-11-05T15:52:05.851230Z INFO Daemon Daemon Downloading artifacts profile blob Nov 5 15:52:05.905558 systemd[2629]: Queued start job for default target default.target. Nov 5 15:52:05.914508 systemd[2629]: Created slice app.slice - User Application Slice. Nov 5 15:52:05.914538 systemd[2629]: Reached target paths.target - Paths. Nov 5 15:52:05.915072 systemd[2629]: Reached target timers.target - Timers. Nov 5 15:52:05.917353 systemd[2629]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:52:05.928044 systemd[2629]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Nov 5 15:52:05.929071 systemd[2629]: Reached target sockets.target - Sockets. Nov 5 15:52:05.929190 systemd[2629]: Reached target basic.target - Basic System. Nov 5 15:52:05.929274 systemd[2629]: Reached target default.target - Main User Target. Nov 5 15:52:05.929379 systemd[2629]: Startup finished in 369ms. Nov 5 15:52:05.929455 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:52:05.936222 waagent[2573]: 2025-11-05T15:52:05.936173Z INFO Daemon Downloaded certificate {'thumbprint': '1B7748F2CC7C236496E4566E033FEA9B7F5611EB', 'hasPrivateKey': True} Nov 5 15:52:05.937633 waagent[2573]: 2025-11-05T15:52:05.936673Z INFO Daemon Fetch goal state completed Nov 5 15:52:05.940303 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:52:05.941141 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:52:05.955609 waagent[2573]: 2025-11-05T15:52:05.954889Z INFO Daemon Daemon Starting provisioning Nov 5 15:52:05.955609 waagent[2573]: 2025-11-05T15:52:05.955020Z INFO Daemon Daemon Handle ovf-env.xml. Nov 5 15:52:05.955609 waagent[2573]: 2025-11-05T15:52:05.955232Z INFO Daemon Daemon Set hostname [ci-4487.0.1-a-e6d953e7e7] Nov 5 15:52:05.973591 waagent[2573]: 2025-11-05T15:52:05.973550Z INFO Daemon Daemon Publish hostname [ci-4487.0.1-a-e6d953e7e7] Nov 5 15:52:05.975211 waagent[2573]: 2025-11-05T15:52:05.974846Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 5 15:52:05.975211 waagent[2573]: 2025-11-05T15:52:05.975120Z INFO Daemon Daemon Primary interface is [eth0] Nov 5 15:52:05.982679 systemd-networkd[2265]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:52:05.982687 systemd-networkd[2265]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:52:05.982742 systemd-networkd[2265]: eth0: DHCP lease lost Nov 5 15:52:06.006574 waagent[2573]: 2025-11-05T15:52:06.006519Z INFO Daemon Daemon Create user account if not exists Nov 5 15:52:06.009565 waagent[2573]: 2025-11-05T15:52:06.007250Z INFO Daemon Daemon User core already exists, skip useradd Nov 5 15:52:06.009565 waagent[2573]: 2025-11-05T15:52:06.007561Z INFO Daemon Daemon Configure sudoer Nov 5 15:52:06.012376 waagent[2573]: 2025-11-05T15:52:06.011358Z INFO Daemon Daemon Configure sshd Nov 5 15:52:06.012474 systemd-networkd[2265]: eth0: DHCPv4 address 10.200.8.46/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 5 15:52:06.031332 waagent[2573]: 2025-11-05T15:52:06.031249Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 5 15:52:06.032833 waagent[2573]: 2025-11-05T15:52:06.032562Z INFO Daemon Daemon Deploy ssh public key. Nov 5 15:52:07.161979 waagent[2573]: 2025-11-05T15:52:07.161927Z INFO Daemon Daemon Provisioning complete Nov 5 15:52:07.184093 waagent[2573]: 2025-11-05T15:52:07.184057Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 5 15:52:07.184469 waagent[2573]: 2025-11-05T15:52:07.184283Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 5 15:52:07.185152 waagent[2573]: 2025-11-05T15:52:07.184507Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 5 15:52:07.292173 waagent[2672]: 2025-11-05T15:52:07.292104Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 5 15:52:07.292522 waagent[2672]: 2025-11-05T15:52:07.292213Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.1 Nov 5 15:52:07.292522 waagent[2672]: 2025-11-05T15:52:07.292254Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 5 15:52:07.292522 waagent[2672]: 2025-11-05T15:52:07.292314Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 5 15:52:07.360941 waagent[2672]: 2025-11-05T15:52:07.360878Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 5 15:52:07.361089 waagent[2672]: 2025-11-05T15:52:07.361063Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:52:07.361147 waagent[2672]: 2025-11-05T15:52:07.361118Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:52:07.365965 waagent[2672]: 2025-11-05T15:52:07.365908Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 15:52:07.371822 waagent[2672]: 2025-11-05T15:52:07.371782Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 5 15:52:07.372179 waagent[2672]: 2025-11-05T15:52:07.372145Z INFO ExtHandler Nov 5 15:52:07.372221 waagent[2672]: 2025-11-05T15:52:07.372201Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a925ef84-7361-426c-a406-eb2308ebce38 eTag: 4452524753322198179 source: Fabric] Nov 5 15:52:07.372450 waagent[2672]: 2025-11-05T15:52:07.372423Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 5 15:52:07.372793 waagent[2672]: 2025-11-05T15:52:07.372769Z INFO ExtHandler Nov 5 15:52:07.372838 waagent[2672]: 2025-11-05T15:52:07.372811Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 5 15:52:07.376370 waagent[2672]: 2025-11-05T15:52:07.376343Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 5 15:52:07.461671 waagent[2672]: 2025-11-05T15:52:07.461586Z INFO ExtHandler Downloaded certificate {'thumbprint': '1B7748F2CC7C236496E4566E033FEA9B7F5611EB', 'hasPrivateKey': True} Nov 5 15:52:07.461983 waagent[2672]: 2025-11-05T15:52:07.461952Z INFO ExtHandler Fetch goal state completed Nov 5 15:52:07.473119 waagent[2672]: 2025-11-05T15:52:07.473072Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 5 15:52:07.477156 waagent[2672]: 2025-11-05T15:52:07.477102Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2672 Nov 5 15:52:07.477269 waagent[2672]: 2025-11-05T15:52:07.477228Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 5 15:52:07.477549 waagent[2672]: 2025-11-05T15:52:07.477523Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 5 15:52:07.478587 waagent[2672]: 2025-11-05T15:52:07.478553Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.1', '', 'Flatcar Container Linux by Kinvolk'] Nov 5 15:52:07.478890 waagent[2672]: 2025-11-05T15:52:07.478863Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 5 15:52:07.478998 waagent[2672]: 2025-11-05T15:52:07.478976Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 5 15:52:07.479430 waagent[2672]: 2025-11-05T15:52:07.479400Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 5 15:52:07.557895 waagent[2672]: 2025-11-05T15:52:07.557861Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 5 15:52:07.558043 waagent[2672]: 2025-11-05T15:52:07.558019Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 5 15:52:07.563702 waagent[2672]: 2025-11-05T15:52:07.563562Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 5 15:52:07.569018 systemd[1]: Reload requested from client PID 2687 ('systemctl') (unit waagent.service)... Nov 5 15:52:07.569042 systemd[1]: Reloading... Nov 5 15:52:07.653363 zram_generator::config[2728]: No configuration found. Nov 5 15:52:07.713300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 5 15:52:07.838751 systemd[1]: Reloading finished in 269 ms. Nov 5 15:52:07.852997 waagent[2672]: 2025-11-05T15:52:07.852925Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 5 15:52:07.853102 waagent[2672]: 2025-11-05T15:52:07.853078Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 5 15:52:08.520588 waagent[2672]: 2025-11-05T15:52:08.520518Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 5 15:52:08.520899 waagent[2672]: 2025-11-05T15:52:08.520866Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 5 15:52:08.521563 waagent[2672]: 2025-11-05T15:52:08.521526Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 5 15:52:08.521893 waagent[2672]: 2025-11-05T15:52:08.521864Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 5 15:52:08.522055 waagent[2672]: 2025-11-05T15:52:08.522018Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:52:08.522205 waagent[2672]: 2025-11-05T15:52:08.522174Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:52:08.522431 waagent[2672]: 2025-11-05T15:52:08.522408Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 5 15:52:08.522526 waagent[2672]: 2025-11-05T15:52:08.522487Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:52:08.522634 waagent[2672]: 2025-11-05T15:52:08.522611Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 5 15:52:08.522697 waagent[2672]: 2025-11-05T15:52:08.522669Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 5 15:52:08.522697 waagent[2672]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 5 15:52:08.522697 waagent[2672]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 5 15:52:08.522697 waagent[2672]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 5 15:52:08.522697 waagent[2672]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:52:08.522697 waagent[2672]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:52:08.522697 waagent[2672]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:52:08.522938 waagent[2672]: 2025-11-05T15:52:08.522729Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 5 15:52:08.523240 waagent[2672]: 2025-11-05T15:52:08.523073Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:52:08.523240 waagent[2672]: 2025-11-05T15:52:08.523208Z INFO EnvHandler ExtHandler Configure routes Nov 5 15:52:08.523522 waagent[2672]: 2025-11-05T15:52:08.523490Z INFO EnvHandler ExtHandler Gateway:None Nov 5 15:52:08.523564 waagent[2672]: 2025-11-05T15:52:08.523539Z INFO EnvHandler ExtHandler Routes:None Nov 5 15:52:08.523851 waagent[2672]: 2025-11-05T15:52:08.523828Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 5 15:52:08.523908 waagent[2672]: 2025-11-05T15:52:08.523888Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 5 15:52:08.523996 waagent[2672]: 2025-11-05T15:52:08.523969Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
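The routing table the agent dumps from /proc/net/route encodes addresses as little-endian hex. In the listing above, 0108C80A is 10.200.8.1 (the DHCP gateway seen earlier), 10813FA8 is 168.63.129.16 (the wireserver host route) and FEA9FEA9 is 169.254.169.254 (IMDS). A small bash decoder:

# Decode the little-endian hex addresses used in /proc/net/route (bash).
decode_route_addr() {
  local hex=$1
  printf '%d.%d.%d.%d\n' \
    $((16#${hex:6:2})) $((16#${hex:4:2})) $((16#${hex:2:2})) $((16#${hex:0:2}))
}
decode_route_addr 0108C80A   # 10.200.8.1    (default gateway)
decode_route_addr 10813FA8   # 168.63.129.16 (Azure wireserver)
decode_route_addr FEA9FEA9   # 169.254.169.254 (IMDS)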
Nov 5 15:52:08.529527 waagent[2672]: 2025-11-05T15:52:08.529497Z INFO ExtHandler ExtHandler Nov 5 15:52:08.529636 waagent[2672]: 2025-11-05T15:52:08.529624Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f6b30049-e267-4d4d-b47e-2a5736945efd correlation 173c3545-3b95-4ee8-ab7f-d8742ad0b243 created: 2025-11-05T15:50:58.218636Z] Nov 5 15:52:08.529889 waagent[2672]: 2025-11-05T15:52:08.529877Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 5 15:52:08.530265 waagent[2672]: 2025-11-05T15:52:08.530250Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 5 15:52:08.561668 waagent[2672]: 2025-11-05T15:52:08.561626Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 5 15:52:08.561668 waagent[2672]: Try `iptables -h' or 'iptables --help' for more information.) Nov 5 15:52:08.562108 waagent[2672]: 2025-11-05T15:52:08.562086Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6A032C26-33E5-4146-9481-90DFED02CB7C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 5 15:52:08.605953 waagent[2672]: 2025-11-05T15:52:08.605899Z INFO MonitorHandler ExtHandler Network interfaces: Nov 5 15:52:08.605953 waagent[2672]: Executing ['ip', '-a', '-o', 'link']: Nov 5 15:52:08.605953 waagent[2672]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 5 15:52:08.605953 waagent[2672]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:92:bb brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e523492bb Nov 5 15:52:08.605953 waagent[2672]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:92:bb brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 5 15:52:08.605953 waagent[2672]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 5 15:52:08.605953 waagent[2672]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 5 15:52:08.605953 waagent[2672]: 2: eth0 inet 10.200.8.46/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 5 15:52:08.605953 waagent[2672]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 5 15:52:08.605953 waagent[2672]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 5 15:52:08.605953 waagent[2672]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:92bb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 5 15:52:08.694703 waagent[2672]: 2025-11-05T15:52:08.694650Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 5 15:52:08.694703 waagent[2672]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:52:08.694703 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 15:52:08.694703 waagent[2672]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:52:08.694703 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 15:52:08.694703 waagent[2672]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Nov 5 15:52:08.694703 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 
15:52:08.694703 waagent[2672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 15:52:08.694703 waagent[2672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 15:52:08.694703 waagent[2672]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 15:52:08.697381 waagent[2672]: 2025-11-05T15:52:08.697327Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 5 15:52:08.697381 waagent[2672]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:52:08.697381 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 15:52:08.697381 waagent[2672]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:52:08.697381 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 15:52:08.697381 waagent[2672]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Nov 5 15:52:08.697381 waagent[2672]: pkts bytes target prot opt in out source destination Nov 5 15:52:08.697381 waagent[2672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 15:52:08.697381 waagent[2672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 15:52:08.697381 waagent[2672]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 15:52:14.471732 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:52:14.472791 systemd[1]: Started sshd@0-10.200.8.46:22-10.200.16.10:40344.service - OpenSSH per-connection server daemon (10.200.16.10:40344). Nov 5 15:52:14.579463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:52:14.580718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:15.503296 sshd[2818]: Accepted publickey for core from 10.200.16.10 port 40344 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:15.504405 sshd-session[2818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:15.508501 systemd-logind[2457]: New session 3 of user core. Nov 5 15:52:15.525426 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:52:16.120147 systemd[1]: Started sshd@1-10.200.8.46:22-10.200.16.10:40350.service - OpenSSH per-connection server daemon (10.200.16.10:40350). Nov 5 15:52:17.009374 sshd[2827]: Accepted publickey for core from 10.200.16.10 port 40350 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:17.010163 sshd-session[2827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:17.015455 systemd-logind[2457]: New session 4 of user core. Nov 5 15:52:17.027483 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:52:17.036895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:17.040381 (kubelet)[2836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:52:17.091645 kubelet[2836]: E1105 15:52:17.091593 2836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:52:17.094804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:52:17.094949 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
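The Azure-fabric firewall rules listed above can be reconstructed approximately as the iptables commands below. This is a reference sketch only: waagent programs and maintains these rules itself, and the 'security' table is inferred from the agent's own 'iptables -w -t security -L OUTPUT' invocation logged earlier.

# Approximate reconstruction of the rules listed above; waagent manages these itself.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
iptables -w -t security -L OUTPUT -nxv   # inspect what is currently programmed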
Nov 5 15:52:17.095313 systemd[1]: kubelet.service: Consumed 141ms CPU time, 111.1M memory peak. Nov 5 15:52:17.417429 sshd[2834]: Connection closed by 10.200.16.10 port 40350 Nov 5 15:52:17.418002 sshd-session[2827]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:17.421125 systemd[1]: sshd@1-10.200.8.46:22-10.200.16.10:40350.service: Deactivated successfully. Nov 5 15:52:17.422676 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:52:17.424539 systemd-logind[2457]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:52:17.425243 systemd-logind[2457]: Removed session 4. Nov 5 15:52:17.545203 systemd[1]: Started sshd@2-10.200.8.46:22-10.200.16.10:40352.service - OpenSSH per-connection server daemon (10.200.16.10:40352). Nov 5 15:52:18.254225 sshd[2848]: Accepted publickey for core from 10.200.16.10 port 40352 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:18.255330 sshd-session[2848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:18.259990 systemd-logind[2457]: New session 5 of user core. Nov 5 15:52:18.266444 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:52:18.746499 sshd[2851]: Connection closed by 10.200.16.10 port 40352 Nov 5 15:52:18.747059 sshd-session[2848]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:18.750076 systemd[1]: sshd@2-10.200.8.46:22-10.200.16.10:40352.service: Deactivated successfully. Nov 5 15:52:18.751623 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:52:18.752899 systemd-logind[2457]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:52:18.753952 systemd-logind[2457]: Removed session 5. Nov 5 15:52:18.880977 systemd[1]: Started sshd@3-10.200.8.46:22-10.200.16.10:40362.service - OpenSSH per-connection server daemon (10.200.16.10:40362). Nov 5 15:52:19.595495 sshd[2857]: Accepted publickey for core from 10.200.16.10 port 40362 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:19.596591 sshd-session[2857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:19.600984 systemd-logind[2457]: New session 6 of user core. Nov 5 15:52:19.603425 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:52:20.089067 sshd[2860]: Connection closed by 10.200.16.10 port 40362 Nov 5 15:52:20.089602 sshd-session[2857]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:20.093094 systemd[1]: sshd@3-10.200.8.46:22-10.200.16.10:40362.service: Deactivated successfully. Nov 5 15:52:20.094607 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:52:20.095455 systemd-logind[2457]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:52:20.096496 systemd-logind[2457]: Removed session 6. Nov 5 15:52:20.216008 systemd[1]: Started sshd@4-10.200.8.46:22-10.200.16.10:39568.service - OpenSSH per-connection server daemon (10.200.16.10:39568). Nov 5 15:52:20.921002 sshd[2866]: Accepted publickey for core from 10.200.16.10 port 39568 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:20.922097 sshd-session[2866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:20.926431 systemd-logind[2457]: New session 7 of user core. Nov 5 15:52:20.935421 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 5 15:52:21.474514 sudo[2870]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:52:21.474737 sudo[2870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:21.504055 sudo[2870]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:21.618408 sshd[2869]: Connection closed by 10.200.16.10 port 39568 Nov 5 15:52:21.619076 sshd-session[2866]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:21.622982 systemd[1]: sshd@4-10.200.8.46:22-10.200.16.10:39568.service: Deactivated successfully. Nov 5 15:52:21.624606 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:52:21.625297 systemd-logind[2457]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:52:21.626542 systemd-logind[2457]: Removed session 7. Nov 5 15:52:21.743068 systemd[1]: Started sshd@5-10.200.8.46:22-10.200.16.10:39582.service - OpenSSH per-connection server daemon (10.200.16.10:39582). Nov 5 15:52:22.450054 sshd[2876]: Accepted publickey for core from 10.200.16.10 port 39582 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:22.451189 sshd-session[2876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:22.455594 systemd-logind[2457]: New session 8 of user core. Nov 5 15:52:22.462424 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:52:22.845492 sudo[2881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:52:22.845724 sudo[2881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:22.852058 sudo[2881]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:22.856886 sudo[2880]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:52:22.857101 sudo[2880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:22.865112 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:52:22.896634 augenrules[2903]: No rules Nov 5 15:52:22.897615 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:52:22.897821 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:52:22.898825 sudo[2880]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:23.013819 sshd[2879]: Connection closed by 10.200.16.10 port 39582 Nov 5 15:52:23.014372 sshd-session[2876]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:23.017518 systemd[1]: sshd@5-10.200.8.46:22-10.200.16.10:39582.service: Deactivated successfully. Nov 5 15:52:23.019047 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:52:23.020827 systemd-logind[2457]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:52:23.021623 systemd-logind[2457]: Removed session 8. Nov 5 15:52:23.136878 systemd[1]: Started sshd@6-10.200.8.46:22-10.200.16.10:39592.service - OpenSSH per-connection server daemon (10.200.16.10:39592). Nov 5 15:52:23.849438 sshd[2912]: Accepted publickey for core from 10.200.16.10 port 39592 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:52:23.850524 sshd-session[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:23.854372 systemd-logind[2457]: New session 9 of user core. Nov 5 15:52:23.861430 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 5 15:52:24.231786 sudo[2916]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:52:24.232012 sudo[2916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:26.319944 chronyd[2435]: Selected source PHC0 Nov 5 15:52:26.422137 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:52:26.435563 (dockerd)[2934]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:52:27.329557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:52:27.331072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:27.608359 dockerd[2934]: time="2025-11-05T15:52:27.608226149Z" level=info msg="Starting up" Nov 5 15:52:27.611783 dockerd[2934]: time="2025-11-05T15:52:27.611752490Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:52:27.620947 dockerd[2934]: time="2025-11-05T15:52:27.620915802Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:52:30.992310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:30.997562 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:52:31.031908 kubelet[2961]: E1105 15:52:31.031871 2961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:52:31.033702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:52:31.033842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:52:31.034147 systemd[1]: kubelet.service: Consumed 129ms CPU time, 108.4M memory peak. Nov 5 15:52:32.170542 systemd[1]: var-lib-docker-metacopy\x2dcheck2130272290-merged.mount: Deactivated successfully. Nov 5 15:52:32.279720 dockerd[2934]: time="2025-11-05T15:52:32.279651389Z" level=info msg="Loading containers: start." Nov 5 15:52:32.403328 kernel: Initializing XFRM netlink socket Nov 5 15:52:32.994460 systemd-networkd[2265]: docker0: Link UP Nov 5 15:52:33.010222 dockerd[2934]: time="2025-11-05T15:52:33.010186392Z" level=info msg="Loading containers: done." 
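Once dockerd finishes initialization (it reports "API listen on /run/docker.sock" just below), the daemon state can be spot-checked from a shell; a sketch, with output values that will vary by host:

# Quick post-start checks for the Docker daemon (sketch).
docker version --format '{{.Server.Version}}'   # 28.0.4 on this host, per the log
docker info --format '{{.Driver}}'              # storage driver (overlay2 here)
ip link show docker0                             # bridge brought up by the daemon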
Nov 5 15:52:33.410253 dockerd[2934]: time="2025-11-05T15:52:33.410189414Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:52:33.410690 dockerd[2934]: time="2025-11-05T15:52:33.410325204Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:52:33.410690 dockerd[2934]: time="2025-11-05T15:52:33.410422018Z" level=info msg="Initializing buildkit" Nov 5 15:52:33.559184 dockerd[2934]: time="2025-11-05T15:52:33.559141716Z" level=info msg="Completed buildkit initialization" Nov 5 15:52:33.566545 dockerd[2934]: time="2025-11-05T15:52:33.566499081Z" level=info msg="Daemon has completed initialization" Nov 5 15:52:33.566750 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:52:33.567098 dockerd[2934]: time="2025-11-05T15:52:33.567055458Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:52:34.620202 containerd[2489]: time="2025-11-05T15:52:34.620141375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 15:52:35.384173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489325362.mount: Deactivated successfully. Nov 5 15:52:36.528298 containerd[2489]: time="2025-11-05T15:52:36.528245074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:36.530467 containerd[2489]: time="2025-11-05T15:52:36.530434901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Nov 5 15:52:36.533058 containerd[2489]: time="2025-11-05T15:52:36.533018189Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:36.537830 containerd[2489]: time="2025-11-05T15:52:36.537776098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:36.538695 containerd[2489]: time="2025-11-05T15:52:36.538524493Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.91833825s" Nov 5 15:52:36.538695 containerd[2489]: time="2025-11-05T15:52:36.538577671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 15:52:36.539407 containerd[2489]: time="2025-11-05T15:52:36.539383578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 15:52:38.001929 containerd[2489]: time="2025-11-05T15:52:38.001884355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:38.004962 containerd[2489]: time="2025-11-05T15:52:38.004934382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active 
requests=0, bytes read=26020852" Nov 5 15:52:38.008610 containerd[2489]: time="2025-11-05T15:52:38.008570226Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:38.012725 containerd[2489]: time="2025-11-05T15:52:38.012692365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:38.013535 containerd[2489]: time="2025-11-05T15:52:38.013386163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.473973873s" Nov 5 15:52:38.013535 containerd[2489]: time="2025-11-05T15:52:38.013416848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 15:52:38.013868 containerd[2489]: time="2025-11-05T15:52:38.013843091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 15:52:39.376945 containerd[2489]: time="2025-11-05T15:52:39.376897176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:39.379787 containerd[2489]: time="2025-11-05T15:52:39.379757784Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Nov 5 15:52:39.382958 containerd[2489]: time="2025-11-05T15:52:39.382918316Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:39.387243 containerd[2489]: time="2025-11-05T15:52:39.387199116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:39.388005 containerd[2489]: time="2025-11-05T15:52:39.387979429Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.374056426s" Nov 5 15:52:39.388049 containerd[2489]: time="2025-11-05T15:52:39.388009002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 15:52:39.388620 containerd[2489]: time="2025-11-05T15:52:39.388597398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 15:52:40.291890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075270915.mount: Deactivated successfully. 
Nov 5 15:52:40.669977 containerd[2489]: time="2025-11-05T15:52:40.669919389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:40.672374 containerd[2489]: time="2025-11-05T15:52:40.672337596Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Nov 5 15:52:40.674975 containerd[2489]: time="2025-11-05T15:52:40.674942084Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:40.678328 containerd[2489]: time="2025-11-05T15:52:40.678297879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:40.678706 containerd[2489]: time="2025-11-05T15:52:40.678684086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.290059038s" Nov 5 15:52:40.678776 containerd[2489]: time="2025-11-05T15:52:40.678763538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 15:52:40.679256 containerd[2489]: time="2025-11-05T15:52:40.679230454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 15:52:40.720258 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 5 15:52:41.079626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 15:52:41.081256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:41.490355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:41.496465 (kubelet)[3237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:52:41.529741 kubelet[3237]: E1105 15:52:41.529683 3237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:52:41.531402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:52:41.531532 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:52:41.531825 systemd[1]: kubelet.service: Consumed 130ms CPU time, 109.5M memory peak. Nov 5 15:52:41.783875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319703127.mount: Deactivated successfully. 
Nov 5 15:52:43.416995 containerd[2489]: time="2025-11-05T15:52:43.416943948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.420133 containerd[2489]: time="2025-11-05T15:52:43.419994925Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 5 15:52:43.422798 containerd[2489]: time="2025-11-05T15:52:43.422773230Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.427291 containerd[2489]: time="2025-11-05T15:52:43.427250025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.428206 containerd[2489]: time="2025-11-05T15:52:43.427979115Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.748716346s" Nov 5 15:52:43.428206 containerd[2489]: time="2025-11-05T15:52:43.428008491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 15:52:43.428735 containerd[2489]: time="2025-11-05T15:52:43.428713854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:52:43.841761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425125355.mount: Deactivated successfully. 
Nov 5 15:52:43.862383 containerd[2489]: time="2025-11-05T15:52:43.862342401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:52:43.864680 containerd[2489]: time="2025-11-05T15:52:43.864648568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 5 15:52:43.867301 containerd[2489]: time="2025-11-05T15:52:43.867251752Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:52:43.870800 containerd[2489]: time="2025-11-05T15:52:43.870760549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:52:43.871369 containerd[2489]: time="2025-11-05T15:52:43.871137860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.39752ms" Nov 5 15:52:43.871369 containerd[2489]: time="2025-11-05T15:52:43.871165798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 15:52:43.871903 containerd[2489]: time="2025-11-05T15:52:43.871867961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 15:52:44.349545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474329212.mount: Deactivated successfully. 
Nov 5 15:52:45.980103 containerd[2489]: time="2025-11-05T15:52:45.980050405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:45.986188 containerd[2489]: time="2025-11-05T15:52:45.986009579Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Nov 5 15:52:45.988687 containerd[2489]: time="2025-11-05T15:52:45.988662981Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:45.992514 containerd[2489]: time="2025-11-05T15:52:45.992484017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:45.993323 containerd[2489]: time="2025-11-05T15:52:45.993191700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.121282167s" Nov 5 15:52:45.993323 containerd[2489]: time="2025-11-05T15:52:45.993220587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 15:52:47.673964 update_engine[2458]: I20251105 15:52:47.673317 2458 update_attempter.cc:509] Updating boot flags... Nov 5 15:52:48.583663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:48.583824 systemd[1]: kubelet.service: Consumed 130ms CPU time, 109.5M memory peak. Nov 5 15:52:48.586222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:48.609918 systemd[1]: Reload requested from client PID 3418 ('systemctl') (unit session-9.scope)... Nov 5 15:52:48.609932 systemd[1]: Reloading... Nov 5 15:52:48.699360 zram_generator::config[3466]: No configuration found. Nov 5 15:52:48.895974 systemd[1]: Reloading finished in 285 ms. Nov 5 15:52:49.008396 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:52:49.008478 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:52:49.008723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:49.008770 systemd[1]: kubelet.service: Consumed 83ms CPU time, 83.2M memory peak. Nov 5 15:52:49.010903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:49.502294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:49.508518 (kubelet)[3533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:52:49.552151 kubelet[3533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:52:49.552854 kubelet[3533]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
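Each "Pulled image" message above reports both an image size and an elapsed time, so per-image pull throughput can be estimated directly from the logged figures. A small reader-side calculation (sizes in bytes and durations in seconds copied from the log; nothing here is emitted by containerd itself):

    # Estimate pull throughput from the "size X in Ys" figures logged above.
    pulls = {
        "kube-apiserver:v1.33.5":          (30111492, 1.918),
        "kube-controller-manager:v1.33.5": (27681301, 1.474),
        "kube-scheduler:v1.33.5":          (21816043, 1.374),
        "kube-proxy:v1.33.5":              (31928488, 1.290),
        "coredns/coredns:v1.12.0":         (20939036, 2.749),
        "pause:3.10":                      (320368,   0.442),
        "etcd:3.5.21-0":                   (58938593, 2.121),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")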
Nov 5 15:52:49.552854 kubelet[3533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:52:49.552854 kubelet[3533]: I1105 15:52:49.552521 3533 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:52:49.995721 kubelet[3533]: I1105 15:52:49.995685 3533 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:52:49.995721 kubelet[3533]: I1105 15:52:49.995709 3533 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:52:49.998041 kubelet[3533]: I1105 15:52:49.996412 3533 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:52:50.028772 kubelet[3533]: E1105 15:52:50.028523 3533 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:52:50.030850 kubelet[3533]: I1105 15:52:50.030826 3533 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:52:50.037054 kubelet[3533]: I1105 15:52:50.037031 3533 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:52:50.039583 kubelet[3533]: I1105 15:52:50.039560 3533 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:52:50.039822 kubelet[3533]: I1105 15:52:50.039791 3533 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:52:50.039966 kubelet[3533]: I1105 15:52:50.039819 3533 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4487.0.1-a-e6d953e7e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:52:50.040089 kubelet[3533]: I1105 15:52:50.039971 3533 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:52:50.040089 kubelet[3533]: I1105 15:52:50.039980 3533 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:52:50.040864 kubelet[3533]: I1105 15:52:50.040846 3533 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:52:50.043985 kubelet[3533]: I1105 15:52:50.043729 3533 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:52:50.043985 kubelet[3533]: I1105 15:52:50.043752 3533 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:52:50.043985 kubelet[3533]: I1105 15:52:50.043782 3533 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:52:50.043985 kubelet[3533]: I1105 15:52:50.043796 3533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:52:50.052252 kubelet[3533]: E1105 15:52:50.052226 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-a-e6d953e7e7&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:52:50.052686 kubelet[3533]: I1105 15:52:50.052675 3533 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:52:50.053316 kubelet[3533]: I1105 15:52:50.053225 3533 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:52:50.056094 kubelet[3533]: W1105 15:52:50.055464 3533 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 5 15:52:50.058319 kubelet[3533]: I1105 15:52:50.057857 3533 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:52:50.058319 kubelet[3533]: I1105 15:52:50.057904 3533 server.go:1289] "Started kubelet" Nov 5 15:52:50.065809 kubelet[3533]: I1105 15:52:50.065653 3533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:52:50.067265 kubelet[3533]: E1105 15:52:50.065858 3533 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-a-e6d953e7e7.1875273c8460cba3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-a-e6d953e7e7,UID:ci-4487.0.1-a-e6d953e7e7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-a-e6d953e7e7,},FirstTimestamp:2025-11-05 15:52:50.057874339 +0000 UTC m=+0.545319049,LastTimestamp:2025-11-05 15:52:50.057874339 +0000 UTC m=+0.545319049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-a-e6d953e7e7,}" Nov 5 15:52:50.068593 kubelet[3533]: E1105 15:52:50.068561 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:52:50.070626 kubelet[3533]: I1105 15:52:50.070435 3533 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:52:50.070800 kubelet[3533]: I1105 15:52:50.070763 3533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:52:50.071017 kubelet[3533]: I1105 15:52:50.071004 3533 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:52:50.072966 kubelet[3533]: I1105 15:52:50.072949 3533 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:52:50.073211 kubelet[3533]: E1105 15:52:50.073200 3533 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" Nov 5 15:52:50.074558 kubelet[3533]: I1105 15:52:50.074530 3533 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:52:50.077134 kubelet[3533]: E1105 15:52:50.077053 3533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-e6d953e7e7?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="200ms" Nov 5 15:52:50.077269 kubelet[3533]: I1105 15:52:50.077251 3533 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:52:50.077350 kubelet[3533]: I1105 15:52:50.077330 3533 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:52:50.078304 kubelet[3533]: I1105 15:52:50.072996 3533 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 
5 15:52:50.078304 kubelet[3533]: I1105 15:52:50.078192 3533 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:52:50.078304 kubelet[3533]: I1105 15:52:50.078244 3533 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:52:50.079151 kubelet[3533]: E1105 15:52:50.079121 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:52:50.079361 kubelet[3533]: E1105 15:52:50.079346 3533 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:52:50.079809 kubelet[3533]: I1105 15:52:50.079790 3533 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:52:50.103837 kubelet[3533]: I1105 15:52:50.103820 3533 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:52:50.103837 kubelet[3533]: I1105 15:52:50.103831 3533 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:52:50.103941 kubelet[3533]: I1105 15:52:50.103867 3533 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:52:50.109668 kubelet[3533]: I1105 15:52:50.109654 3533 policy_none.go:49] "None policy: Start" Nov 5 15:52:50.109722 kubelet[3533]: I1105 15:52:50.109671 3533 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:52:50.109722 kubelet[3533]: I1105 15:52:50.109681 3533 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:52:50.118839 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:52:50.131210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:52:50.135612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:52:50.137508 kubelet[3533]: I1105 15:52:50.137485 3533 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:52:50.138884 kubelet[3533]: I1105 15:52:50.138810 3533 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:52:50.138884 kubelet[3533]: I1105 15:52:50.138830 3533 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:52:50.138884 kubelet[3533]: I1105 15:52:50.138848 3533 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:52:50.138884 kubelet[3533]: I1105 15:52:50.138855 3533 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:52:50.139024 kubelet[3533]: E1105 15:52:50.138888 3533 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:52:50.139987 kubelet[3533]: E1105 15:52:50.139962 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:52:50.140964 kubelet[3533]: E1105 15:52:50.140913 3533 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:52:50.141078 kubelet[3533]: I1105 15:52:50.141064 3533 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:52:50.141135 kubelet[3533]: I1105 15:52:50.141082 3533 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:52:50.141907 kubelet[3533]: I1105 15:52:50.141704 3533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:52:50.143982 kubelet[3533]: E1105 15:52:50.143961 3533 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:52:50.144045 kubelet[3533]: E1105 15:52:50.144008 3533 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-a-e6d953e7e7\" not found" Nov 5 15:52:50.242643 kubelet[3533]: I1105 15:52:50.242601 3533 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.243066 kubelet[3533]: E1105 15:52:50.243045 3533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.260046 systemd[1]: Created slice kubepods-burstable-pod545f29158b2245eb59c28689e68581ac.slice - libcontainer container kubepods-burstable-pod545f29158b2245eb59c28689e68581ac.slice. Nov 5 15:52:50.268162 kubelet[3533]: E1105 15:52:50.267967 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.271387 systemd[1]: Created slice kubepods-burstable-pod748e315011c9eeb33e2b5958cfa538d0.slice - libcontainer container kubepods-burstable-pod748e315011c9eeb33e2b5958cfa538d0.slice. 
Nov 5 15:52:50.273078 kubelet[3533]: E1105 15:52:50.273057 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.277458 kubelet[3533]: E1105 15:52:50.277425 3533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-e6d953e7e7?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="400ms" Nov 5 15:52:50.279692 kubelet[3533]: I1105 15:52:50.279668 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279750 kubelet[3533]: I1105 15:52:50.279694 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279750 kubelet[3533]: I1105 15:52:50.279713 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279750 kubelet[3533]: I1105 15:52:50.279737 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279826 kubelet[3533]: I1105 15:52:50.279755 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279826 kubelet[3533]: I1105 15:52:50.279773 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279826 kubelet[3533]: I1105 15:52:50.279791 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " 
pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279826 kubelet[3533]: I1105 15:52:50.279814 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.279916 kubelet[3533]: I1105 15:52:50.279833 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6f4124ad6284756727e190ca12f907-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-a-e6d953e7e7\" (UID: \"ef6f4124ad6284756727e190ca12f907\") " pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.305114 kubelet[3533]: E1105 15:52:50.305023 3533 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-a-e6d953e7e7.1875273c8460cba3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-a-e6d953e7e7,UID:ci-4487.0.1-a-e6d953e7e7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-a-e6d953e7e7,},FirstTimestamp:2025-11-05 15:52:50.057874339 +0000 UTC m=+0.545319049,LastTimestamp:2025-11-05 15:52:50.057874339 +0000 UTC m=+0.545319049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-a-e6d953e7e7,}" Nov 5 15:52:50.444962 kubelet[3533]: I1105 15:52:50.444926 3533 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.445236 kubelet[3533]: E1105 15:52:50.445211 3533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.477436 systemd[1]: Created slice kubepods-burstable-podef6f4124ad6284756727e190ca12f907.slice - libcontainer container kubepods-burstable-podef6f4124ad6284756727e190ca12f907.slice. 
Nov 5 15:52:50.479512 kubelet[3533]: E1105 15:52:50.479058 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.479873 containerd[2489]: time="2025-11-05T15:52:50.479847559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-a-e6d953e7e7,Uid:ef6f4124ad6284756727e190ca12f907,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:50.569169 containerd[2489]: time="2025-11-05T15:52:50.569089820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-a-e6d953e7e7,Uid:545f29158b2245eb59c28689e68581ac,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:50.574054 containerd[2489]: time="2025-11-05T15:52:50.574027140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-a-e6d953e7e7,Uid:748e315011c9eeb33e2b5958cfa538d0,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:50.678074 kubelet[3533]: E1105 15:52:50.678022 3533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-e6d953e7e7?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="800ms" Nov 5 15:52:50.728777 containerd[2489]: time="2025-11-05T15:52:50.728695004Z" level=info msg="connecting to shim 6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773" address="unix:///run/containerd/s/25407eefcf5effa8f5d28a0fe0a8ea27a8449fcf28eecb3831204e0e7f13e7ac" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:50.760612 systemd[1]: Started cri-containerd-6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773.scope - libcontainer container 6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773. Nov 5 15:52:50.771982 containerd[2489]: time="2025-11-05T15:52:50.771666985Z" level=info msg="connecting to shim 4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6" address="unix:///run/containerd/s/fd413d9eae316697b143ae10fc51b9dcaee8a7902c11c5a3fb5db34ea171a718" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:50.782851 containerd[2489]: time="2025-11-05T15:52:50.782818548Z" level=info msg="connecting to shim 38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e" address="unix:///run/containerd/s/5337f4d14c75a0391b2ad28d49b6cee32b7ed09d3d26f4efeb25beba8ac83db9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:50.818310 systemd[1]: Started cri-containerd-38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e.scope - libcontainer container 38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e. Nov 5 15:52:50.821027 systemd[1]: Started cri-containerd-4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6.scope - libcontainer container 4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6. 
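The repeated "Failed to ensure lease exists, will retry" errors back off geometrically while the API server at 10.200.8.46:6443 is still unreachable: the logged intervals go 200ms, 400ms, 800ms and, a little later, 1.6s. A tiny sketch reproducing that doubling sequence (parameters inferred from the logged intervals, not taken from kubelet source):

    def lease_retry_intervals(initial_ms: float = 200, factor: float = 2.0, steps: int = 4):
        interval = initial_ms
        for _ in range(steps):
            yield interval
            interval *= factor

    print([f"{ms:g}ms" for ms in lease_retry_intervals()])  # ['200ms', '400ms', '800ms', '1600ms']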
Nov 5 15:52:50.848115 kubelet[3533]: I1105 15:52:50.848097 3533 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.848708 kubelet[3533]: E1105 15:52:50.848683 3533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.46:6443/api/v1/nodes\": dial tcp 10.200.8.46:6443: connect: connection refused" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:50.856855 containerd[2489]: time="2025-11-05T15:52:50.856829187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-a-e6d953e7e7,Uid:ef6f4124ad6284756727e190ca12f907,Namespace:kube-system,Attempt:0,} returns sandbox id \"6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773\"" Nov 5 15:52:50.865630 containerd[2489]: time="2025-11-05T15:52:50.865356499Z" level=info msg="CreateContainer within sandbox \"6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:52:50.892312 containerd[2489]: time="2025-11-05T15:52:50.892147760Z" level=info msg="Container 43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:50.901481 containerd[2489]: time="2025-11-05T15:52:50.901453213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-a-e6d953e7e7,Uid:748e315011c9eeb33e2b5958cfa538d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e\"" Nov 5 15:52:50.933173 kubelet[3533]: E1105 15:52:50.933149 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:52:50.951665 kubelet[3533]: E1105 15:52:50.951643 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-a-e6d953e7e7&limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:52:51.202884 containerd[2489]: time="2025-11-05T15:52:51.202843687Z" level=info msg="CreateContainer within sandbox \"38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:52:51.320520 containerd[2489]: time="2025-11-05T15:52:51.320409407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-a-e6d953e7e7,Uid:545f29158b2245eb59c28689e68581ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6\"" Nov 5 15:52:51.326998 containerd[2489]: time="2025-11-05T15:52:51.326928575Z" level=info msg="CreateContainer within sandbox \"4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:52:51.330670 containerd[2489]: time="2025-11-05T15:52:51.330580899Z" level=info msg="CreateContainer within sandbox \"6730d86c9b47cecc5645ffb34da0cd0a784301f117f0abc2d22e0b29505ef773\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77\"" Nov 5 15:52:51.331368 containerd[2489]: time="2025-11-05T15:52:51.331337852Z" level=info msg="StartContainer for \"43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77\"" Nov 5 15:52:51.332328 containerd[2489]: time="2025-11-05T15:52:51.332303653Z" level=info msg="connecting to shim 43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77" address="unix:///run/containerd/s/25407eefcf5effa8f5d28a0fe0a8ea27a8449fcf28eecb3831204e0e7f13e7ac" protocol=ttrpc version=3 Nov 5 15:52:51.344204 containerd[2489]: time="2025-11-05T15:52:51.344180945Z" level=info msg="Container 2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:51.349571 systemd[1]: Started cri-containerd-43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77.scope - libcontainer container 43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77. Nov 5 15:52:51.379938 containerd[2489]: time="2025-11-05T15:52:51.379185852Z" level=info msg="CreateContainer within sandbox \"38df2c8509f94ad4046f909682baf98570a7b6739aae15f77798aec4b39e5e0e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25\"" Nov 5 15:52:51.379938 containerd[2489]: time="2025-11-05T15:52:51.379742957Z" level=info msg="StartContainer for \"2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25\"" Nov 5 15:52:51.381982 containerd[2489]: time="2025-11-05T15:52:51.381554176Z" level=info msg="connecting to shim 2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25" address="unix:///run/containerd/s/5337f4d14c75a0391b2ad28d49b6cee32b7ed09d3d26f4efeb25beba8ac83db9" protocol=ttrpc version=3 Nov 5 15:52:51.384980 containerd[2489]: time="2025-11-05T15:52:51.384958491Z" level=info msg="Container 7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:51.405380 kubelet[3533]: E1105 15:52:51.405332 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:52:51.406635 systemd[1]: Started cri-containerd-2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25.scope - libcontainer container 2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25. 
Nov 5 15:52:51.415537 containerd[2489]: time="2025-11-05T15:52:51.415509839Z" level=info msg="StartContainer for \"43423fac26a9f5d1e2b98f7319d6438b6ba91a216cfb874ccb93064993600f77\" returns successfully" Nov 5 15:52:51.435110 containerd[2489]: time="2025-11-05T15:52:51.435030858Z" level=info msg="CreateContainer within sandbox \"4a7df01d807648bdebfb4f6551e5ac4471d5248294f8a1ee3b75ab29daaedcd6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9\"" Nov 5 15:52:51.435500 containerd[2489]: time="2025-11-05T15:52:51.435483952Z" level=info msg="StartContainer for \"7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9\"" Nov 5 15:52:51.438430 containerd[2489]: time="2025-11-05T15:52:51.438365435Z" level=info msg="connecting to shim 7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9" address="unix:///run/containerd/s/fd413d9eae316697b143ae10fc51b9dcaee8a7902c11c5a3fb5db34ea171a718" protocol=ttrpc version=3 Nov 5 15:52:51.459407 kubelet[3533]: E1105 15:52:51.459331 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:52:51.466545 systemd[1]: Started cri-containerd-7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9.scope - libcontainer container 7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9. Nov 5 15:52:51.478475 kubelet[3533]: E1105 15:52:51.478432 3533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-e6d953e7e7?timeout=10s\": dial tcp 10.200.8.46:6443: connect: connection refused" interval="1.6s" Nov 5 15:52:51.491460 containerd[2489]: time="2025-11-05T15:52:51.491438577Z" level=info msg="StartContainer for \"2bd4f675a3421d1b1b1f59acec4c7b9e21969c816d159ac83aeb2d9e90dd5f25\" returns successfully" Nov 5 15:52:51.525617 containerd[2489]: time="2025-11-05T15:52:51.525576170Z" level=info msg="StartContainer for \"7e0143439e8504661a6360ad2b219d23f5bbc757277c08bcabbf316373e2cfd9\" returns successfully" Nov 5 15:52:51.651703 kubelet[3533]: I1105 15:52:51.651680 3533 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:52.149854 kubelet[3533]: E1105 15:52:52.149591 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:52.155936 kubelet[3533]: E1105 15:52:52.155799 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:52.156636 kubelet[3533]: E1105 15:52:52.156623 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:53.160181 kubelet[3533]: E1105 15:52:53.159689 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:53.160181 kubelet[3533]: E1105 
15:52:53.159953 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:53.160181 kubelet[3533]: E1105 15:52:53.160108 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.072219 kubelet[3533]: E1105 15:52:54.072170 3533 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.161304 kubelet[3533]: E1105 15:52:54.159899 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.161304 kubelet[3533]: E1105 15:52:54.159940 3533 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-e6d953e7e7\" not found" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.169019 kubelet[3533]: I1105 15:52:54.168990 3533 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.169126 kubelet[3533]: E1105 15:52:54.169048 3533 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487.0.1-a-e6d953e7e7\": node \"ci-4487.0.1-a-e6d953e7e7\" not found" Nov 5 15:52:54.173867 kubelet[3533]: I1105 15:52:54.173839 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.239483 kubelet[3533]: E1105 15:52:54.239453 3533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.239483 kubelet[3533]: I1105 15:52:54.239481 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.241678 kubelet[3533]: E1105 15:52:54.241651 3533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.241678 kubelet[3533]: I1105 15:52:54.241677 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:54.247444 kubelet[3533]: E1105 15:52:54.247397 3533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-a-e6d953e7e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:55.063220 kubelet[3533]: I1105 15:52:55.063184 3533 apiserver.go:52] "Watching apiserver" Nov 5 15:52:55.078378 kubelet[3533]: I1105 15:52:55.078348 3533 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:52:55.966051 systemd[1]: Reload requested from client PID 3811 ('systemctl') (unit session-9.scope)... Nov 5 15:52:55.966068 systemd[1]: Reloading... Nov 5 15:52:56.045346 zram_generator::config[3858]: No configuration found. 
Nov 5 15:52:56.267409 systemd[1]: Reloading finished in 301 ms. Nov 5 15:52:56.297548 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:56.318197 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:52:56.318444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:56.318500 systemd[1]: kubelet.service: Consumed 855ms CPU time, 130.6M memory peak. Nov 5 15:52:56.320101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:57.900699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:57.914524 (kubelet)[3926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:52:57.954524 kubelet[3926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:52:57.954524 kubelet[3926]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:52:57.954524 kubelet[3926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:52:57.954524 kubelet[3926]: I1105 15:52:57.954348 3926 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:52:57.961003 kubelet[3926]: I1105 15:52:57.960941 3926 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:52:57.961003 kubelet[3926]: I1105 15:52:57.960962 3926 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:52:57.962244 kubelet[3926]: I1105 15:52:57.962214 3926 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:52:57.965754 kubelet[3926]: I1105 15:52:57.965616 3926 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:52:57.968029 kubelet[3926]: I1105 15:52:57.968009 3926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:52:57.973036 kubelet[3926]: I1105 15:52:57.973012 3926 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:52:57.975991 kubelet[3926]: I1105 15:52:57.975974 3926 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:52:57.976176 kubelet[3926]: I1105 15:52:57.976150 3926 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:52:57.976395 kubelet[3926]: I1105 15:52:57.976180 3926 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-a-e6d953e7e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:52:57.976600 kubelet[3926]: I1105 15:52:57.976406 3926 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:52:57.976600 kubelet[3926]: I1105 15:52:57.976416 3926 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:52:57.976600 kubelet[3926]: I1105 15:52:57.976467 3926 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:52:57.976671 kubelet[3926]: I1105 15:52:57.976611 3926 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:52:57.976671 kubelet[3926]: I1105 15:52:57.976622 3926 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:52:57.976671 kubelet[3926]: I1105 15:52:57.976641 3926 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:52:57.978314 kubelet[3926]: I1105 15:52:57.977318 3926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:52:57.980194 kubelet[3926]: I1105 15:52:57.980175 3926 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:52:57.980678 kubelet[3926]: I1105 15:52:57.980663 3926 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:52:57.984246 kubelet[3926]: I1105 15:52:57.984231 3926 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:52:57.984400 kubelet[3926]: I1105 15:52:57.984393 3926 server.go:1289] "Started kubelet" Nov 5 15:52:57.993432 kubelet[3926]: I1105 15:52:57.993419 3926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:52:58.002405 kubelet[3926]: I1105 15:52:58.002354 
3926 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:52:58.003250 kubelet[3926]: I1105 15:52:58.003227 3926 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:52:58.009477 kubelet[3926]: I1105 15:52:58.009435 3926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:52:58.009599 kubelet[3926]: I1105 15:52:58.009586 3926 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:52:58.009754 kubelet[3926]: I1105 15:52:58.009742 3926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:52:58.012120 kubelet[3926]: I1105 15:52:58.011492 3926 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:52:58.012120 kubelet[3926]: I1105 15:52:58.011681 3926 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:52:58.012120 kubelet[3926]: I1105 15:52:58.011769 3926 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:52:58.013657 kubelet[3926]: I1105 15:52:58.013636 3926 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:52:58.013797 kubelet[3926]: I1105 15:52:58.013782 3926 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:52:58.016799 kubelet[3926]: E1105 15:52:58.016768 3926 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:52:58.017430 kubelet[3926]: I1105 15:52:58.017411 3926 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:52:58.029716 kubelet[3926]: I1105 15:52:58.029668 3926 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:52:58.031567 kubelet[3926]: I1105 15:52:58.031338 3926 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:52:58.031567 kubelet[3926]: I1105 15:52:58.031357 3926 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:52:58.031567 kubelet[3926]: I1105 15:52:58.031375 3926 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
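The kubelet startup lines above warn that --container-runtime-endpoint and --volume-plugin-dir are deprecated flag forms that belong in the file passed via --config, while --pod-infra-container-image has no file equivalent and, per the same warning, will be removed in 1.35 in favour of the CRI-reported sandbox image. As a hedged sketch only, the flag-to-config mapping and the hard-eviction thresholds from the container-manager NodeConfig dump above can be written out as plain data; the field names follow the upstream KubeletConfiguration v1beta1 documentation, and the endpoint and plugin-dir values are illustrative assumptions, not read from this host.

package main

import "fmt"

func main() {
    // Illustrative only: config-file equivalents of the deprecated kubelet flags
    // logged above. Field names assumed from the KubeletConfiguration v1beta1
    // documentation; the values are hypothetical, not taken from this machine.
    configFileEquivalents := map[string]string{
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock", // assumed value
        "volumePluginDir":          "/var/lib/kubelet/volumeplugins",         // assumed value
    }

    // Hard-eviction thresholds exactly as dumped in the NodeConfig above:
    // 100Mi memory.available, 10% nodefs.available, 5% nodefs.inodesFree,
    // 15% imagefs.available, 5% imagefs.inodesFree.
    evictionHard := map[string]string{
        "memory.available":   "100Mi",
        "nodefs.available":   "10%",
        "nodefs.inodesFree":  "5%",
        "imagefs.available":  "15%",
        "imagefs.inodesFree": "5%",
    }

    fmt.Println(configFileEquivalents)
    fmt.Println(evictionHard)
}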
Nov 5 15:52:58.031567 kubelet[3926]: I1105 15:52:58.031382 3926 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:52:58.031567 kubelet[3926]: E1105 15:52:58.031416 3926 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:52:58.093563 kubelet[3926]: I1105 15:52:58.093548 3926 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:52:58.094113 kubelet[3926]: I1105 15:52:58.094096 3926 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:52:58.094173 kubelet[3926]: I1105 15:52:58.094167 3926 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:52:58.094319 kubelet[3926]: I1105 15:52:58.094310 3926 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:52:58.094383 kubelet[3926]: I1105 15:52:58.094366 3926 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:52:58.095012 kubelet[3926]: I1105 15:52:58.094419 3926 policy_none.go:49] "None policy: Start" Nov 5 15:52:58.095012 kubelet[3926]: I1105 15:52:58.094430 3926 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:52:58.095012 kubelet[3926]: I1105 15:52:58.094438 3926 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:52:58.095012 kubelet[3926]: I1105 15:52:58.094543 3926 state_mem.go:75] "Updated machine memory state" Nov 5 15:52:58.100200 kubelet[3926]: E1105 15:52:58.100187 3926 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:52:58.100460 kubelet[3926]: I1105 15:52:58.100445 3926 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:52:58.100511 kubelet[3926]: I1105 15:52:58.100461 3926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:52:58.102795 kubelet[3926]: I1105 15:52:58.102782 3926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:52:58.104661 kubelet[3926]: E1105 15:52:58.104645 3926 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:52:58.132526 kubelet[3926]: I1105 15:52:58.132505 3926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.132908 kubelet[3926]: I1105 15:52:58.132738 3926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.133390 kubelet[3926]: I1105 15:52:58.132810 3926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.141067 kubelet[3926]: I1105 15:52:58.141029 3926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:52:58.149776 kubelet[3926]: I1105 15:52:58.149759 3926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:52:58.150009 kubelet[3926]: I1105 15:52:58.149875 3926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:52:58.206559 kubelet[3926]: I1105 15:52:58.206408 3926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.211936 kubelet[3926]: I1105 15:52:58.211918 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6f4124ad6284756727e190ca12f907-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-a-e6d953e7e7\" (UID: \"ef6f4124ad6284756727e190ca12f907\") " pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212247 kubelet[3926]: I1105 15:52:58.212187 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212247 kubelet[3926]: I1105 15:52:58.212205 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212247 kubelet[3926]: I1105 15:52:58.212225 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212480 kubelet[3926]: I1105 15:52:58.212441 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " 
pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212595 kubelet[3926]: I1105 15:52:58.212466 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/545f29158b2245eb59c28689e68581ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-a-e6d953e7e7\" (UID: \"545f29158b2245eb59c28689e68581ac\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212595 kubelet[3926]: I1105 15:52:58.212558 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212730 kubelet[3926]: I1105 15:52:58.212679 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.212730 kubelet[3926]: I1105 15:52:58.212701 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/748e315011c9eeb33e2b5958cfa538d0-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-a-e6d953e7e7\" (UID: \"748e315011c9eeb33e2b5958cfa538d0\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.218895 kubelet[3926]: I1105 15:52:58.218854 3926 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.219027 kubelet[3926]: I1105 15:52:58.219001 3926 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:58.979738 kubelet[3926]: I1105 15:52:58.979706 3926 apiserver.go:52] "Watching apiserver" Nov 5 15:52:59.012273 kubelet[3926]: I1105 15:52:59.012229 3926 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:52:59.062099 kubelet[3926]: I1105 15:52:59.062072 3926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:59.071296 kubelet[3926]: I1105 15:52:59.068978 3926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:52:59.071296 kubelet[3926]: E1105 15:52:59.069030 3926 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-a-e6d953e7e7\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" Nov 5 15:52:59.083552 kubelet[3926]: I1105 15:52:59.083501 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-a-e6d953e7e7" podStartSLOduration=1.083467588 podStartE2EDuration="1.083467588s" podCreationTimestamp="2025-11-05 15:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:59.082572207 +0000 UTC m=+1.163828861" watchObservedRunningTime="2025-11-05 
15:52:59.083467588 +0000 UTC m=+1.164724238" Nov 5 15:52:59.105816 kubelet[3926]: I1105 15:52:59.105744 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-e6d953e7e7" podStartSLOduration=1.105728796 podStartE2EDuration="1.105728796s" podCreationTimestamp="2025-11-05 15:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:59.097103344 +0000 UTC m=+1.178359987" watchObservedRunningTime="2025-11-05 15:52:59.105728796 +0000 UTC m=+1.186985442" Nov 5 15:52:59.106586 kubelet[3926]: I1105 15:52:59.106461 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-a-e6d953e7e7" podStartSLOduration=1.106447739 podStartE2EDuration="1.106447739s" podCreationTimestamp="2025-11-05 15:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:59.105700699 +0000 UTC m=+1.186957367" watchObservedRunningTime="2025-11-05 15:52:59.106447739 +0000 UTC m=+1.187704419" Nov 5 15:53:02.127568 kubelet[3926]: I1105 15:53:02.127492 3926 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:53:02.128991 containerd[2489]: time="2025-11-05T15:53:02.128818154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 15:53:02.130412 kubelet[3926]: I1105 15:53:02.130117 3926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:53:02.996904 systemd[1]: Created slice kubepods-besteffort-pod3400109c_2dbf_48a8_85c5_ef01832be693.slice - libcontainer container kubepods-besteffort-pod3400109c_2dbf_48a8_85c5_ef01832be693.slice. 
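The "Created slice" entry above shows how kubelet, running with the systemd cgroup driver and CgroupsPerQOS enabled, places the best-effort kube-proxy pod under a kubepods-besteffort-pod<uid>.slice unit; the dashes in the pod UID are escaped to underscores because "-" separates hierarchy levels in systemd slice names. A minimal sketch of that mapping, using only the UID quoted in the volume lines that follow:

package main

import (
    "fmt"
    "strings"
)

func main() {
    // Pod UID as quoted in the kube-proxy-vlhg4 volume lines below.
    uid := "3400109c-2dbf-48a8-85c5-ef01832be693"

    // "-" denotes nesting in slice unit names, so kubelet escapes the UID's
    // dashes to underscores before nesting it under the kubepods-besteffort
    // parent, matching the "Created slice" entry above.
    slice := fmt.Sprintf("kubepods-besteffort-pod%s.slice", strings.ReplaceAll(uid, "-", "_"))
    fmt.Println(slice) // kubepods-besteffort-pod3400109c_2dbf_48a8_85c5_ef01832be693.slice
}

Guaranteed pods sit directly under kubepods.slice and burstable pods get a kubepods-burstable parent, which is why the QoS class shows up in the unit name here.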
Nov 5 15:53:03.038273 kubelet[3926]: I1105 15:53:03.038235 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3400109c-2dbf-48a8-85c5-ef01832be693-xtables-lock\") pod \"kube-proxy-vlhg4\" (UID: \"3400109c-2dbf-48a8-85c5-ef01832be693\") " pod="kube-system/kube-proxy-vlhg4" Nov 5 15:53:03.038401 kubelet[3926]: I1105 15:53:03.038295 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3400109c-2dbf-48a8-85c5-ef01832be693-lib-modules\") pod \"kube-proxy-vlhg4\" (UID: \"3400109c-2dbf-48a8-85c5-ef01832be693\") " pod="kube-system/kube-proxy-vlhg4" Nov 5 15:53:03.038401 kubelet[3926]: I1105 15:53:03.038317 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3400109c-2dbf-48a8-85c5-ef01832be693-kube-proxy\") pod \"kube-proxy-vlhg4\" (UID: \"3400109c-2dbf-48a8-85c5-ef01832be693\") " pod="kube-system/kube-proxy-vlhg4" Nov 5 15:53:03.038401 kubelet[3926]: I1105 15:53:03.038334 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xctf\" (UniqueName: \"kubernetes.io/projected/3400109c-2dbf-48a8-85c5-ef01832be693-kube-api-access-5xctf\") pod \"kube-proxy-vlhg4\" (UID: \"3400109c-2dbf-48a8-85c5-ef01832be693\") " pod="kube-system/kube-proxy-vlhg4" Nov 5 15:53:03.083128 systemd[1]: Created slice kubepods-besteffort-pod86f419e8_3d14_4fd5_8e82_f2d84f5c09e0.slice - libcontainer container kubepods-besteffort-pod86f419e8_3d14_4fd5_8e82_f2d84f5c09e0.slice. Nov 5 15:53:03.139303 kubelet[3926]: I1105 15:53:03.138857 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kjg\" (UniqueName: \"kubernetes.io/projected/86f419e8-3d14-4fd5-8e82-f2d84f5c09e0-kube-api-access-26kjg\") pod \"tigera-operator-7dcd859c48-lsc47\" (UID: \"86f419e8-3d14-4fd5-8e82-f2d84f5c09e0\") " pod="tigera-operator/tigera-operator-7dcd859c48-lsc47" Nov 5 15:53:03.139303 kubelet[3926]: I1105 15:53:03.138926 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86f419e8-3d14-4fd5-8e82-f2d84f5c09e0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lsc47\" (UID: \"86f419e8-3d14-4fd5-8e82-f2d84f5c09e0\") " pod="tigera-operator/tigera-operator-7dcd859c48-lsc47" Nov 5 15:53:03.305874 containerd[2489]: time="2025-11-05T15:53:03.305767002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlhg4,Uid:3400109c-2dbf-48a8-85c5-ef01832be693,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:03.343338 containerd[2489]: time="2025-11-05T15:53:03.342608896Z" level=info msg="connecting to shim 567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02" address="unix:///run/containerd/s/9951beadad6b21663470599ad909255fff8febac4baf498fe03ba93aed661e73" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:03.366452 systemd[1]: Started cri-containerd-567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02.scope - libcontainer container 567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02. 
Nov 5 15:53:03.388677 containerd[2489]: time="2025-11-05T15:53:03.388646199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lsc47,Uid:86f419e8-3d14-4fd5-8e82-f2d84f5c09e0,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:53:03.390342 containerd[2489]: time="2025-11-05T15:53:03.390322807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlhg4,Uid:3400109c-2dbf-48a8-85c5-ef01832be693,Namespace:kube-system,Attempt:0,} returns sandbox id \"567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02\"" Nov 5 15:53:03.399527 containerd[2489]: time="2025-11-05T15:53:03.399434537Z" level=info msg="CreateContainer within sandbox \"567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:53:03.428182 containerd[2489]: time="2025-11-05T15:53:03.428156651Z" level=info msg="Container 583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:03.452774 containerd[2489]: time="2025-11-05T15:53:03.452739488Z" level=info msg="CreateContainer within sandbox \"567ee5341ba2ab77672cdc181960478cb9dcf5770be671688fcf265843c90c02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544\"" Nov 5 15:53:03.453559 containerd[2489]: time="2025-11-05T15:53:03.453487372Z" level=info msg="StartContainer for \"583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544\"" Nov 5 15:53:03.456368 containerd[2489]: time="2025-11-05T15:53:03.456340837Z" level=info msg="connecting to shim 583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544" address="unix:///run/containerd/s/9951beadad6b21663470599ad909255fff8febac4baf498fe03ba93aed661e73" protocol=ttrpc version=3 Nov 5 15:53:03.460056 containerd[2489]: time="2025-11-05T15:53:03.460025909Z" level=info msg="connecting to shim 4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432" address="unix:///run/containerd/s/66bf5cd51e2c857240e37dc67533e5b8a020cb1dc8625ad48395d436a7ae5c9d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:03.478446 systemd[1]: Started cri-containerd-583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544.scope - libcontainer container 583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544. Nov 5 15:53:03.491434 systemd[1]: Started cri-containerd-4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432.scope - libcontainer container 4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432. 
Nov 5 15:53:03.526893 containerd[2489]: time="2025-11-05T15:53:03.526817411Z" level=info msg="StartContainer for \"583c0f03e7e0cefcbbce59eca4cbb93d683e9ccce76ab535fdfa7c4a98678544\" returns successfully" Nov 5 15:53:03.549925 containerd[2489]: time="2025-11-05T15:53:03.549902741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lsc47,Uid:86f419e8-3d14-4fd5-8e82-f2d84f5c09e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432\"" Nov 5 15:53:03.551744 containerd[2489]: time="2025-11-05T15:53:03.551687898Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:53:04.128736 kubelet[3926]: I1105 15:53:04.128337 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vlhg4" podStartSLOduration=2.128269238 podStartE2EDuration="2.128269238s" podCreationTimestamp="2025-11-05 15:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:04.105225614 +0000 UTC m=+6.186482259" watchObservedRunningTime="2025-11-05 15:53:04.128269238 +0000 UTC m=+6.209525879" Nov 5 15:53:05.000749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730330739.mount: Deactivated successfully. Nov 5 15:53:06.146454 containerd[2489]: time="2025-11-05T15:53:06.146412598Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:06.150306 containerd[2489]: time="2025-11-05T15:53:06.150185504Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:53:06.153458 containerd[2489]: time="2025-11-05T15:53:06.153416146Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:06.157306 containerd[2489]: time="2025-11-05T15:53:06.157184442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:06.157931 containerd[2489]: time="2025-11-05T15:53:06.157602886Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.605886759s" Nov 5 15:53:06.157931 containerd[2489]: time="2025-11-05T15:53:06.157632326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:53:06.164295 containerd[2489]: time="2025-11-05T15:53:06.164252786Z" level=info msg="CreateContainer within sandbox \"4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:53:06.195994 containerd[2489]: time="2025-11-05T15:53:06.195448736Z" level=info msg="Container 5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:06.212876 containerd[2489]: time="2025-11-05T15:53:06.212843495Z" level=info msg="CreateContainer within 
sandbox \"4becfed5f838c0cb7e93619f5d5f65698de9ba5453ffb95ea029d0c7f0ae1432\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f\"" Nov 5 15:53:06.214244 containerd[2489]: time="2025-11-05T15:53:06.213390318Z" level=info msg="StartContainer for \"5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f\"" Nov 5 15:53:06.214244 containerd[2489]: time="2025-11-05T15:53:06.214185425Z" level=info msg="connecting to shim 5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f" address="unix:///run/containerd/s/66bf5cd51e2c857240e37dc67533e5b8a020cb1dc8625ad48395d436a7ae5c9d" protocol=ttrpc version=3 Nov 5 15:53:06.233535 systemd[1]: Started cri-containerd-5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f.scope - libcontainer container 5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f. Nov 5 15:53:06.264878 containerd[2489]: time="2025-11-05T15:53:06.264852385Z" level=info msg="StartContainer for \"5a1b628a232cb528fddc83c30e4a5cac7ba8646f9fa93a4af6cbb9394f0d796f\" returns successfully" Nov 5 15:53:08.365152 kubelet[3926]: I1105 15:53:08.365032 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lsc47" podStartSLOduration=2.757981594 podStartE2EDuration="5.365014345s" podCreationTimestamp="2025-11-05 15:53:03 +0000 UTC" firstStartedPulling="2025-11-05 15:53:03.551173764 +0000 UTC m=+5.632430414" lastFinishedPulling="2025-11-05 15:53:06.158206527 +0000 UTC m=+8.239463165" observedRunningTime="2025-11-05 15:53:07.139054013 +0000 UTC m=+9.220310683" watchObservedRunningTime="2025-11-05 15:53:08.365014345 +0000 UTC m=+10.446270991" Nov 5 15:53:11.927510 sudo[2916]: pam_unix(sudo:session): session closed for user root Nov 5 15:53:12.041058 sshd[2915]: Connection closed by 10.200.16.10 port 39592 Nov 5 15:53:12.041570 sshd-session[2912]: pam_unix(sshd:session): session closed for user core Nov 5 15:53:12.045788 systemd-logind[2457]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:53:12.047972 systemd[1]: sshd@6-10.200.8.46:22-10.200.16.10:39592.service: Deactivated successfully. Nov 5 15:53:12.052463 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:53:12.053007 systemd[1]: session-9.scope: Consumed 3.725s CPU time, 231.1M memory peak. Nov 5 15:53:12.060707 systemd-logind[2457]: Removed session 9. Nov 5 15:53:16.583964 systemd[1]: Created slice kubepods-besteffort-podca680b2d_8cb6_4508_85cb_e059d2d1ca25.slice - libcontainer container kubepods-besteffort-podca680b2d_8cb6_4508_85cb_e059d2d1ca25.slice. 
Nov 5 15:53:16.616848 kubelet[3926]: I1105 15:53:16.616347 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca680b2d-8cb6-4508-85cb-e059d2d1ca25-tigera-ca-bundle\") pod \"calico-typha-76d86c9fcb-2csh7\" (UID: \"ca680b2d-8cb6-4508-85cb-e059d2d1ca25\") " pod="calico-system/calico-typha-76d86c9fcb-2csh7" Nov 5 15:53:16.616848 kubelet[3926]: I1105 15:53:16.616393 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs69x\" (UniqueName: \"kubernetes.io/projected/ca680b2d-8cb6-4508-85cb-e059d2d1ca25-kube-api-access-xs69x\") pod \"calico-typha-76d86c9fcb-2csh7\" (UID: \"ca680b2d-8cb6-4508-85cb-e059d2d1ca25\") " pod="calico-system/calico-typha-76d86c9fcb-2csh7" Nov 5 15:53:16.616848 kubelet[3926]: I1105 15:53:16.616418 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca680b2d-8cb6-4508-85cb-e059d2d1ca25-typha-certs\") pod \"calico-typha-76d86c9fcb-2csh7\" (UID: \"ca680b2d-8cb6-4508-85cb-e059d2d1ca25\") " pod="calico-system/calico-typha-76d86c9fcb-2csh7" Nov 5 15:53:16.813580 systemd[1]: Created slice kubepods-besteffort-podc91e0c17_7049_4eb2_b9e2_06c2f175aead.slice - libcontainer container kubepods-besteffort-podc91e0c17_7049_4eb2_b9e2_06c2f175aead.slice. Nov 5 15:53:16.817569 kubelet[3926]: I1105 15:53:16.817533 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-flexvol-driver-host\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817678 kubelet[3926]: I1105 15:53:16.817594 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-policysync\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817678 kubelet[3926]: I1105 15:53:16.817614 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-var-run-calico\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817678 kubelet[3926]: I1105 15:53:16.817635 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-cni-net-dir\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817678 kubelet[3926]: I1105 15:53:16.817650 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-lib-modules\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817678 kubelet[3926]: I1105 15:53:16.817668 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgm5j\" 
(UniqueName: \"kubernetes.io/projected/c91e0c17-7049-4eb2-b9e2-06c2f175aead-kube-api-access-xgm5j\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817807 kubelet[3926]: I1105 15:53:16.817684 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-cni-log-dir\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817807 kubelet[3926]: I1105 15:53:16.817702 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-cni-bin-dir\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817807 kubelet[3926]: I1105 15:53:16.817720 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-xtables-lock\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817807 kubelet[3926]: I1105 15:53:16.817740 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c91e0c17-7049-4eb2-b9e2-06c2f175aead-node-certs\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817807 kubelet[3926]: I1105 15:53:16.817756 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c91e0c17-7049-4eb2-b9e2-06c2f175aead-tigera-ca-bundle\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.817933 kubelet[3926]: I1105 15:53:16.817774 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c91e0c17-7049-4eb2-b9e2-06c2f175aead-var-lib-calico\") pod \"calico-node-jkmk8\" (UID: \"c91e0c17-7049-4eb2-b9e2-06c2f175aead\") " pod="calico-system/calico-node-jkmk8" Nov 5 15:53:16.890665 containerd[2489]: time="2025-11-05T15:53:16.890436882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d86c9fcb-2csh7,Uid:ca680b2d-8cb6-4508-85cb-e059d2d1ca25,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:16.924929 kubelet[3926]: E1105 15:53:16.924897 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:16.925096 kubelet[3926]: W1105 15:53:16.925039 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:16.925096 kubelet[3926]: E1105 15:53:16.925067 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:16.932880 kubelet[3926]: E1105 15:53:16.932817 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:16.932880 kubelet[3926]: W1105 15:53:16.932835 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:16.932880 kubelet[3926]: E1105 15:53:16.932850 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:16.944255 containerd[2489]: time="2025-11-05T15:53:16.944183858Z" level=info msg="connecting to shim e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824" address="unix:///run/containerd/s/7fa7b8dea59319aa4e35110ff6af063a126978534204e7f680a74b33d09cbcfa" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:16.973656 systemd[1]: Started cri-containerd-e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824.scope - libcontainer container e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824. Nov 5 15:53:17.060591 kubelet[3926]: E1105 15:53:17.060345 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:17.114098 kubelet[3926]: E1105 15:53:17.114027 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.114098 kubelet[3926]: W1105 15:53:17.114045 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.114098 kubelet[3926]: E1105 15:53:17.114062 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.114453 kubelet[3926]: E1105 15:53:17.114375 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.114453 kubelet[3926]: W1105 15:53:17.114383 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.114453 kubelet[3926]: E1105 15:53:17.114392 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:17.114583 kubelet[3926]: E1105 15:53:17.114576 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.114709 kubelet[3926]: W1105 15:53:17.114619 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.114709 kubelet[3926]: E1105 15:53:17.114638 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.114857 kubelet[3926]: E1105 15:53:17.114851 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.114921 kubelet[3926]: W1105 15:53:17.114893 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.114921 kubelet[3926]: E1105 15:53:17.114902 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.115143 kubelet[3926]: E1105 15:53:17.115086 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.115143 kubelet[3926]: W1105 15:53:17.115093 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.115143 kubelet[3926]: E1105 15:53:17.115100 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.115303 kubelet[3926]: E1105 15:53:17.115298 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.115372 kubelet[3926]: W1105 15:53:17.115335 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.115372 kubelet[3926]: E1105 15:53:17.115343 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.115588 kubelet[3926]: E1105 15:53:17.115518 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.115588 kubelet[3926]: W1105 15:53:17.115525 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.115588 kubelet[3926]: E1105 15:53:17.115532 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:17.115702 kubelet[3926]: E1105 15:53:17.115697 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.115738 kubelet[3926]: W1105 15:53:17.115733 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.115857 kubelet[3926]: E1105 15:53:17.115781 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.115949 kubelet[3926]: E1105 15:53:17.115942 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.115990 kubelet[3926]: W1105 15:53:17.115985 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.116050 kubelet[3926]: E1105 15:53:17.116043 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.116261 kubelet[3926]: E1105 15:53:17.116206 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.116261 kubelet[3926]: W1105 15:53:17.116215 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.116261 kubelet[3926]: E1105 15:53:17.116222 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.116485 kubelet[3926]: E1105 15:53:17.116468 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.116485 kubelet[3926]: W1105 15:53:17.116483 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.116550 kubelet[3926]: E1105 15:53:17.116493 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.117652 kubelet[3926]: E1105 15:53:17.117619 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.117652 kubelet[3926]: W1105 15:53:17.117637 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.117652 kubelet[3926]: E1105 15:53:17.117653 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:17.117967 kubelet[3926]: E1105 15:53:17.117876 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.117967 kubelet[3926]: W1105 15:53:17.117883 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.117967 kubelet[3926]: E1105 15:53:17.117892 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.118073 kubelet[3926]: E1105 15:53:17.117994 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.118073 kubelet[3926]: W1105 15:53:17.118000 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.118073 kubelet[3926]: E1105 15:53:17.118007 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.118245 kubelet[3926]: E1105 15:53:17.118097 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.118245 kubelet[3926]: W1105 15:53:17.118102 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.118245 kubelet[3926]: E1105 15:53:17.118108 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.118245 kubelet[3926]: E1105 15:53:17.118192 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.118245 kubelet[3926]: W1105 15:53:17.118196 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.118245 kubelet[3926]: E1105 15:53:17.118201 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.119243 kubelet[3926]: E1105 15:53:17.119168 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.119243 kubelet[3926]: W1105 15:53:17.119183 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.119243 kubelet[3926]: E1105 15:53:17.119196 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:17.119623 kubelet[3926]: E1105 15:53:17.119374 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.119623 kubelet[3926]: W1105 15:53:17.119381 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.119623 kubelet[3926]: E1105 15:53:17.119390 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.119623 kubelet[3926]: E1105 15:53:17.119510 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.119623 kubelet[3926]: W1105 15:53:17.119516 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.119623 kubelet[3926]: E1105 15:53:17.119522 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.119623 kubelet[3926]: E1105 15:53:17.119625 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.120175 kubelet[3926]: W1105 15:53:17.119630 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.120175 kubelet[3926]: E1105 15:53:17.119637 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.120175 kubelet[3926]: E1105 15:53:17.119842 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.120175 kubelet[3926]: W1105 15:53:17.119849 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.120175 kubelet[3926]: E1105 15:53:17.119857 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:17.120175 kubelet[3926]: I1105 15:53:17.119888 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs9f\" (UniqueName: \"kubernetes.io/projected/0f742570-0c09-4ef6-8800-4cac3ba577e3-kube-api-access-lfs9f\") pod \"csi-node-driver-nllfx\" (UID: \"0f742570-0c09-4ef6-8800-4cac3ba577e3\") " pod="calico-system/csi-node-driver-nllfx" Nov 5 15:53:17.120175 kubelet[3926]: E1105 15:53:17.120166 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.120175 kubelet[3926]: W1105 15:53:17.120175 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.120175 kubelet[3926]: E1105 15:53:17.120184 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.120826 containerd[2489]: time="2025-11-05T15:53:17.119656193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkmk8,Uid:c91e0c17-7049-4eb2-b9e2-06c2f175aead,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:17.120870 kubelet[3926]: I1105 15:53:17.120210 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f742570-0c09-4ef6-8800-4cac3ba577e3-kubelet-dir\") pod \"csi-node-driver-nllfx\" (UID: \"0f742570-0c09-4ef6-8800-4cac3ba577e3\") " pod="calico-system/csi-node-driver-nllfx" Nov 5 15:53:17.120870 kubelet[3926]: E1105 15:53:17.120628 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.120870 kubelet[3926]: W1105 15:53:17.120639 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.120870 kubelet[3926]: E1105 15:53:17.120652 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:17.120870 kubelet[3926]: I1105 15:53:17.120675 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f742570-0c09-4ef6-8800-4cac3ba577e3-registration-dir\") pod \"csi-node-driver-nllfx\" (UID: \"0f742570-0c09-4ef6-8800-4cac3ba577e3\") " pod="calico-system/csi-node-driver-nllfx" Nov 5 15:53:17.121264 kubelet[3926]: E1105 15:53:17.120833 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:17.121264 kubelet[3926]: W1105 15:53:17.120911 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:17.121264 kubelet[3926]: E1105 15:53:17.120921 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 5 15:53:17.121264 kubelet[3926]: I1105 15:53:17.120969 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f742570-0c09-4ef6-8800-4cac3ba577e3-socket-dir\") pod \"csi-node-driver-nllfx\" (UID: \"0f742570-0c09-4ef6-8800-4cac3ba577e3\") " pod="calico-system/csi-node-driver-nllfx"
Nov 5 15:53:17.121650 kubelet[3926]: E1105 15:53:17.121417 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:53:17.121650 kubelet[3926]: W1105 15:53:17.121426 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:53:17.121650 kubelet[3926]: E1105 15:53:17.121437 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:53:17.125405 kubelet[3926]: I1105 15:53:17.125367 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f742570-0c09-4ef6-8800-4cac3ba577e3-varrun\") pod \"csi-node-driver-nllfx\" (UID: \"0f742570-0c09-4ef6-8800-4cac3ba577e3\") " pod="calico-system/csi-node-driver-nllfx"
[... the same driver-call.go / plugins.go FlexVolume probe failure from kubelet[3926] repeats with identical text through Nov 5 15:53:17.126; duplicate entries omitted ...]
Nov 5 15:53:17.142331 containerd[2489]: time="2025-11-05T15:53:17.141150810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d86c9fcb-2csh7,Uid:ca680b2d-8cb6-4508-85cb-e059d2d1ca25,Namespace:calico-system,Attempt:0,} returns sandbox id \"e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824\""
Nov 5 15:53:17.143693 containerd[2489]: time="2025-11-05T15:53:17.143660451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 5 15:53:17.177668 containerd[2489]: time="2025-11-05T15:53:17.176983090Z" level=info msg="connecting to shim d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631" address="unix:///run/containerd/s/ca2e0af5b3f547a278fcdc7319b1842701bb00e13ca5a44cd03563a8bc3edc6d" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:53:17.204430 systemd[1]: Started cri-containerd-d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631.scope - libcontainer container d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631.
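The repeated driver-call.go / plugins.go messages above are kubelet's periodic FlexVolume plugin probe: it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present on this node, so the call yields empty output, and decoding that empty output as JSON produces exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode follows (illustrative only; the struct below is an assumption, not kubelet's internal type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is an illustrative stand-in for the JSON a FlexVolume
    // driver is expected to print in response to "init".
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The nodeagent~uds binary is missing, so the driver call produces
        // no output at all; decoding the empty output reproduces the error
        // text seen in the kubelet log.
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // prints: unexpected end of JSON input
    }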
[... identical FlexVolume probe failures ("Failed to unmarshal output for command: init", "executable file not found in $PATH", "Error dynamically probing plugins") from kubelet[3926] continue from Nov 5 15:53:17.226 through Nov 5 15:53:17.239; duplicate entries omitted ...]
Nov 5 15:53:17.233652 containerd[2489]: time="2025-11-05T15:53:17.233626784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkmk8,Uid:c91e0c17-7049-4eb2-b9e2-06c2f175aead,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\""
Nov 5 15:53:18.656337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324683520.mount: Deactivated successfully.
Nov 5 15:53:19.032321 kubelet[3926]: E1105 15:53:19.032156 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3"
Nov 5 15:53:19.552979 containerd[2489]: time="2025-11-05T15:53:19.552909080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:53:19.555333 containerd[2489]: time="2025-11-05T15:53:19.555303029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 5 15:53:19.557919 containerd[2489]: time="2025-11-05T15:53:19.557882316Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:53:19.561535 containerd[2489]: time="2025-11-05T15:53:19.561488173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:53:19.561965 containerd[2489]: time="2025-11-05T15:53:19.561811481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.418121103s"
Nov 5 15:53:19.561965 containerd[2489]: time="2025-11-05T15:53:19.561838485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
containerd[2489]: time="2025-11-05T15:53:19.563117321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:53:19.580430 containerd[2489]: time="2025-11-05T15:53:19.580400506Z" level=info msg="CreateContainer within sandbox \"e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:53:19.599303 containerd[2489]: time="2025-11-05T15:53:19.598403312Z" level=info msg="Container bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:19.615222 containerd[2489]: time="2025-11-05T15:53:19.615196249Z" level=info msg="CreateContainer within sandbox \"e73339de37ae19bcdcd00e71cf862edb520adcff91d44ebd12de81c568187824\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320\"" Nov 5 15:53:19.616824 containerd[2489]: time="2025-11-05T15:53:19.616621295Z" level=info msg="StartContainer for \"bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320\"" Nov 5 15:53:19.620716 containerd[2489]: time="2025-11-05T15:53:19.620680121Z" level=info msg="connecting to shim bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320" address="unix:///run/containerd/s/7fa7b8dea59319aa4e35110ff6af063a126978534204e7f680a74b33d09cbcfa" protocol=ttrpc version=3 Nov 5 15:53:19.641460 systemd[1]: Started cri-containerd-bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320.scope - libcontainer container bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320. Nov 5 15:53:19.693011 containerd[2489]: time="2025-11-05T15:53:19.692962021Z" level=info msg="StartContainer for \"bfc6a5158aef56ab223b1e509c58286abd89892141871696aae2337dee1c9320\" returns successfully" Nov 5 15:53:20.136616 kubelet[3926]: E1105 15:53:20.136585 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:20.136616 kubelet[3926]: W1105 15:53:20.136612 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:20.137341 kubelet[3926]: E1105 15:53:20.136635 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:20.137341 kubelet[3926]: E1105 15:53:20.136755 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:20.137341 kubelet[3926]: W1105 15:53:20.136761 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:20.137341 kubelet[3926]: E1105 15:53:20.136769 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
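For contrast with the failing probe, a conforming FlexVolume driver answers the init call with a small JSON status document, which is what kubelet is trying to parse above. A hedged Go sketch of a stand-in binary for the probed nodeagent~uds path follows; the "Success" / "Not supported" status strings and the capabilities field follow the general FlexVolume convention and are assumptions here, not the behavior of the real uds driver. Installing such a binary under the probed exec directory (or removing the stale nodeagent~uds directory) should stop the repeated probe failures, which are otherwise harmless noise while the Calico containers start.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Minimal stand-in for a binary at
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    // The response shape is an assumption based on the FlexVolume convention.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(map[string]any{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            })
            fmt.Println(string(out)) // kubelet parses this instead of ""
            return
        }
        // Every other driver call is declared unsupported.
        fmt.Println(`{"status": "Not supported"}`)
    }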
Nov 5 15:53:20.137341 kubelet[3926]: E1105 15:53:20.136864 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:53:20.137341 kubelet[3926]: W1105 15:53:20.136870 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:53:20.137341 kubelet[3926]: E1105 15:53:20.136877 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same FlexVolume probe failure repeats with identical text from Nov 5 15:53:20.136 through Nov 5 15:53:20.151; duplicate kubelet[3926] entries omitted ...]
Nov 5 15:53:21.032676 kubelet[3926]: E1105 15:53:21.032639 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3"
Nov 5 15:53:21.111416 kubelet[3926]: I1105 15:53:21.111388 3926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
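The pod_workers.go error for csi-node-driver-nllfx is a separate condition from the FlexVolume noise: the container runtime reports NetworkReady=false because no CNI configuration has been installed yet, and it normally clears once the calico-node container started above writes its CNI config. A rough Go sketch of that readiness condition, assuming the conventional /etc/cni/net.d location (the exact path on this node is an assumption):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Sketch of the "cni plugin not initialized" condition: the runtime only
    // reports the node network as ready once a CNI config file exists.
    func main() {
        confs, err := filepath.Glob("/etc/cni/net.d/*.conf*")
        if err != nil || len(confs) == 0 {
            fmt.Println("no CNI config yet; pods that need pod networking stay pending")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", confs)
    }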
Error: unexpected end of JSON input" Nov 5 15:53:21.146221 kubelet[3926]: E1105 15:53:21.146053 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146221 kubelet[3926]: W1105 15:53:21.146058 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146221 kubelet[3926]: E1105 15:53:21.146064 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146152 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146670 kubelet[3926]: W1105 15:53:21.146160 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146168 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146312 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146670 kubelet[3926]: W1105 15:53:21.146318 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146327 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146469 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146670 kubelet[3926]: W1105 15:53:21.146475 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146482 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146670 kubelet[3926]: E1105 15:53:21.146578 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146967 kubelet[3926]: W1105 15:53:21.146586 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146593 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146728 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146967 kubelet[3926]: W1105 15:53:21.146735 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146741 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146843 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146967 kubelet[3926]: W1105 15:53:21.146848 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146854 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.146967 kubelet[3926]: E1105 15:53:21.146940 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.146967 kubelet[3926]: W1105 15:53:21.146945 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.146950 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147037 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.147386 kubelet[3926]: W1105 15:53:21.147041 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147047 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147138 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.147386 kubelet[3926]: W1105 15:53:21.147143 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147149 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147294 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.147386 kubelet[3926]: W1105 15:53:21.147300 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.147386 kubelet[3926]: E1105 15:53:21.147308 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.147677 kubelet[3926]: E1105 15:53:21.147404 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.147677 kubelet[3926]: W1105 15:53:21.147409 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.147677 kubelet[3926]: E1105 15:53:21.147436 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.152848 kubelet[3926]: E1105 15:53:21.152785 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.152848 kubelet[3926]: W1105 15:53:21.152804 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.152848 kubelet[3926]: E1105 15:53:21.152820 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.153591 kubelet[3926]: E1105 15:53:21.153553 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.153591 kubelet[3926]: W1105 15:53:21.153567 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.153760 kubelet[3926]: E1105 15:53:21.153687 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.153935 kubelet[3926]: E1105 15:53:21.153911 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.153935 kubelet[3926]: W1105 15:53:21.153921 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.153935 kubelet[3926]: E1105 15:53:21.153932 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.154379 kubelet[3926]: E1105 15:53:21.154167 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.154379 kubelet[3926]: W1105 15:53:21.154175 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.154379 kubelet[3926]: E1105 15:53:21.154187 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.154379 kubelet[3926]: E1105 15:53:21.154350 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.154379 kubelet[3926]: W1105 15:53:21.154356 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.154379 kubelet[3926]: E1105 15:53:21.154365 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.154823 kubelet[3926]: E1105 15:53:21.154518 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.154823 kubelet[3926]: W1105 15:53:21.154524 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.154823 kubelet[3926]: E1105 15:53:21.154533 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.154823 kubelet[3926]: E1105 15:53:21.154761 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.154823 kubelet[3926]: W1105 15:53:21.154768 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.154823 kubelet[3926]: E1105 15:53:21.154776 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.154989 kubelet[3926]: E1105 15:53:21.154899 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.154989 kubelet[3926]: W1105 15:53:21.154904 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.154989 kubelet[3926]: E1105 15:53:21.154911 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.155256 kubelet[3926]: E1105 15:53:21.155138 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.155256 kubelet[3926]: W1105 15:53:21.155148 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.155256 kubelet[3926]: E1105 15:53:21.155159 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.155442 kubelet[3926]: E1105 15:53:21.155424 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.155442 kubelet[3926]: W1105 15:53:21.155438 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.155514 kubelet[3926]: E1105 15:53:21.155446 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.155775 kubelet[3926]: E1105 15:53:21.155663 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.155775 kubelet[3926]: W1105 15:53:21.155675 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.155775 kubelet[3926]: E1105 15:53:21.155685 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.156122 kubelet[3926]: E1105 15:53:21.155934 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156122 kubelet[3926]: W1105 15:53:21.155942 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156122 kubelet[3926]: E1105 15:53:21.155950 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.156210 kubelet[3926]: E1105 15:53:21.156152 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156210 kubelet[3926]: W1105 15:53:21.156161 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156210 kubelet[3926]: E1105 15:53:21.156169 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.156312 kubelet[3926]: E1105 15:53:21.156307 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156337 kubelet[3926]: W1105 15:53:21.156315 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156337 kubelet[3926]: E1105 15:53:21.156322 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.156408 kubelet[3926]: E1105 15:53:21.156405 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156454 kubelet[3926]: W1105 15:53:21.156410 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156454 kubelet[3926]: E1105 15:53:21.156416 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.156632 kubelet[3926]: E1105 15:53:21.156623 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156664 kubelet[3926]: W1105 15:53:21.156632 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156664 kubelet[3926]: E1105 15:53:21.156641 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.156877 kubelet[3926]: E1105 15:53:21.156866 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.156877 kubelet[3926]: W1105 15:53:21.156875 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.156936 kubelet[3926]: E1105 15:53:21.156883 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:53:21.157020 kubelet[3926]: E1105 15:53:21.156995 3926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:53:21.157020 kubelet[3926]: W1105 15:53:21.157016 3926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:53:21.157065 kubelet[3926]: E1105 15:53:21.157023 3926 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:53:21.331756 containerd[2489]: time="2025-11-05T15:53:21.331716547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:21.335691 containerd[2489]: time="2025-11-05T15:53:21.335651297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:53:21.340433 containerd[2489]: time="2025-11-05T15:53:21.340375666Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:21.346206 containerd[2489]: time="2025-11-05T15:53:21.345749671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:21.346206 containerd[2489]: time="2025-11-05T15:53:21.346101587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.781904219s" Nov 5 15:53:21.346206 containerd[2489]: time="2025-11-05T15:53:21.346128762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:53:21.353273 containerd[2489]: time="2025-11-05T15:53:21.353249149Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:53:21.374249 containerd[2489]: time="2025-11-05T15:53:21.374221627Z" level=info msg="Container bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:21.391431 containerd[2489]: time="2025-11-05T15:53:21.391407130Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\"" Nov 5 15:53:21.392324 containerd[2489]: time="2025-11-05T15:53:21.391824476Z" level=info msg="StartContainer for \"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\"" Nov 5 15:53:21.393401 containerd[2489]: time="2025-11-05T15:53:21.393372814Z" level=info msg="connecting to shim bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3" address="unix:///run/containerd/s/ca2e0af5b3f547a278fcdc7319b1842701bb00e13ca5a44cd03563a8bc3edc6d" protocol=ttrpc version=3 Nov 5 15:53:21.416447 systemd[1]: Started cri-containerd-bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3.scope - libcontainer container bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3. 
Nov 5 15:53:21.453715 containerd[2489]: time="2025-11-05T15:53:21.452753421Z" level=info msg="StartContainer for \"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\" returns successfully" Nov 5 15:53:21.455631 systemd[1]: cri-containerd-bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3.scope: Deactivated successfully. Nov 5 15:53:21.459609 containerd[2489]: time="2025-11-05T15:53:21.459559550Z" level=info msg="received exit event container_id:\"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\" id:\"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\" pid:4635 exited_at:{seconds:1762358001 nanos:458963744}" Nov 5 15:53:21.459609 containerd[2489]: time="2025-11-05T15:53:21.459588284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\" id:\"bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3\" pid:4635 exited_at:{seconds:1762358001 nanos:458963744}" Nov 5 15:53:21.474371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb2df9345b5a05808eb63473f9c4b559e2fc3a99dd221a4cc06658e783d482a3-rootfs.mount: Deactivated successfully. Nov 5 15:53:22.131025 kubelet[3926]: I1105 15:53:22.130556 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76d86c9fcb-2csh7" podStartSLOduration=3.7113677689999998 podStartE2EDuration="6.130541381s" podCreationTimestamp="2025-11-05 15:53:16 +0000 UTC" firstStartedPulling="2025-11-05 15:53:17.143371321 +0000 UTC m=+19.224627968" lastFinishedPulling="2025-11-05 15:53:19.562544951 +0000 UTC m=+21.643801580" observedRunningTime="2025-11-05 15:53:20.121410597 +0000 UTC m=+22.202667256" watchObservedRunningTime="2025-11-05 15:53:22.130541381 +0000 UTC m=+24.211798039" Nov 5 15:53:23.031861 kubelet[3926]: E1105 15:53:23.031815 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:24.120651 containerd[2489]: time="2025-11-05T15:53:24.120588893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:53:25.032028 kubelet[3926]: E1105 15:53:25.031968 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:27.032630 kubelet[3926]: E1105 15:53:27.032579 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:28.806531 containerd[2489]: time="2025-11-05T15:53:28.806481013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:28.808828 containerd[2489]: time="2025-11-05T15:53:28.808789289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:53:28.811603 
containerd[2489]: time="2025-11-05T15:53:28.811559500Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:28.815406 containerd[2489]: time="2025-11-05T15:53:28.815359585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:28.816058 containerd[2489]: time="2025-11-05T15:53:28.815757100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.695107573s" Nov 5 15:53:28.816058 containerd[2489]: time="2025-11-05T15:53:28.815787582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:53:28.823100 containerd[2489]: time="2025-11-05T15:53:28.823067659Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:53:28.840436 containerd[2489]: time="2025-11-05T15:53:28.840363942Z" level=info msg="Container 089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:28.861790 containerd[2489]: time="2025-11-05T15:53:28.861748715Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\"" Nov 5 15:53:28.862572 containerd[2489]: time="2025-11-05T15:53:28.862524036Z" level=info msg="StartContainer for \"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\"" Nov 5 15:53:28.864083 containerd[2489]: time="2025-11-05T15:53:28.864048883Z" level=info msg="connecting to shim 089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789" address="unix:///run/containerd/s/ca2e0af5b3f547a278fcdc7319b1842701bb00e13ca5a44cd03563a8bc3edc6d" protocol=ttrpc version=3 Nov 5 15:53:28.886439 systemd[1]: Started cri-containerd-089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789.scope - libcontainer container 089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789. 
Nov 5 15:53:28.924047 containerd[2489]: time="2025-11-05T15:53:28.923943475Z" level=info msg="StartContainer for \"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\" returns successfully" Nov 5 15:53:29.032721 kubelet[3926]: E1105 15:53:29.032665 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:31.031708 kubelet[3926]: E1105 15:53:31.031667 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:33.031706 kubelet[3926]: E1105 15:53:33.031654 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:33.319479 kubelet[3926]: I1105 15:53:33.319145 3926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:53:35.032489 kubelet[3926]: E1105 15:53:35.032428 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:35.670207 systemd[1]: cri-containerd-089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789.scope: Deactivated successfully. Nov 5 15:53:35.670757 systemd[1]: cri-containerd-089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789.scope: Consumed 437ms CPU time, 194.3M memory peak, 171.3M written to disk. Nov 5 15:53:35.671478 containerd[2489]: time="2025-11-05T15:53:35.671385487Z" level=info msg="received exit event container_id:\"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\" id:\"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\" pid:4692 exited_at:{seconds:1762358015 nanos:670006287}" Nov 5 15:53:35.671915 containerd[2489]: time="2025-11-05T15:53:35.671765061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\" id:\"089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789\" pid:4692 exited_at:{seconds:1762358015 nanos:670006287}" Nov 5 15:53:35.692356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-089c112b86e0d511d432fe46d2d30796e55f247d2e346bbafc81c5ec99644789-rootfs.mount: Deactivated successfully. Nov 5 15:53:35.731933 kubelet[3926]: I1105 15:53:35.731908 3926 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:53:36.718269 systemd[1]: Created slice kubepods-burstable-pod53ece130_7146_4442_9e4a_b716be345aed.slice - libcontainer container kubepods-burstable-pod53ece130_7146_4442_9e4a_b716be345aed.slice. 
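The kubepods-…-pod….slice names that systemd reports above are derived mechanically from each pod's QoS class and UID; with the systemd cgroup driver, the dashes inside the UID are swapped for underscores because systemd treats "-" in unit names as a hierarchy separator. A small illustrative reconstruction (not kubelet's own code) using the two UIDs visible in this log:

package main

import (
	"fmt"
	"strings"
)

// sliceName reconstructs the pattern seen in the log: kubepods-<qos>-pod<uid>.slice,
// with the UID's dashes replaced by underscores. Illustrative only.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "53ece130-7146-4442-9e4a-b716be345aed"))
	// kubepods-burstable-pod53ece130_7146_4442_9e4a_b716be345aed.slice
	fmt.Println(sliceName("besteffort", "0f742570-0c09-4ef6-8800-4cac3ba577e3"))
	// kubepods-besteffort-pod0f742570_0c09_4ef6_8800_4cac3ba577e3.slice
}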
Nov 5 15:53:36.723727 systemd[1]: Created slice kubepods-besteffort-pod0f742570_0c09_4ef6_8800_4cac3ba577e3.slice - libcontainer container kubepods-besteffort-pod0f742570_0c09_4ef6_8800_4cac3ba577e3.slice. Nov 5 15:53:36.726513 containerd[2489]: time="2025-11-05T15:53:36.726459032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nllfx,Uid:0f742570-0c09-4ef6-8800-4cac3ba577e3,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:36.742739 kubelet[3926]: I1105 15:53:36.742709 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53ece130-7146-4442-9e4a-b716be345aed-config-volume\") pod \"coredns-674b8bbfcf-6xhgc\" (UID: \"53ece130-7146-4442-9e4a-b716be345aed\") " pod="kube-system/coredns-674b8bbfcf-6xhgc" Nov 5 15:53:36.742739 kubelet[3926]: I1105 15:53:36.742745 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dg4\" (UniqueName: \"kubernetes.io/projected/53ece130-7146-4442-9e4a-b716be345aed-kube-api-access-77dg4\") pod \"coredns-674b8bbfcf-6xhgc\" (UID: \"53ece130-7146-4442-9e4a-b716be345aed\") " pod="kube-system/coredns-674b8bbfcf-6xhgc" Nov 5 15:53:37.022517 containerd[2489]: time="2025-11-05T15:53:37.022403748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6xhgc,Uid:53ece130-7146-4442-9e4a-b716be345aed,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:38.913147 systemd[1]: Created slice kubepods-burstable-pod73bb0ab3_5e34_4e2f_bfe9_ee75e359909c.slice - libcontainer container kubepods-burstable-pod73bb0ab3_5e34_4e2f_bfe9_ee75e359909c.slice. Nov 5 15:53:38.954138 kubelet[3926]: I1105 15:53:38.954025 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj28b\" (UniqueName: \"kubernetes.io/projected/73bb0ab3-5e34-4e2f-bfe9-ee75e359909c-kube-api-access-jj28b\") pod \"coredns-674b8bbfcf-xndbm\" (UID: \"73bb0ab3-5e34-4e2f-bfe9-ee75e359909c\") " pod="kube-system/coredns-674b8bbfcf-xndbm" Nov 5 15:53:38.954138 kubelet[3926]: I1105 15:53:38.954095 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73bb0ab3-5e34-4e2f-bfe9-ee75e359909c-config-volume\") pod \"coredns-674b8bbfcf-xndbm\" (UID: \"73bb0ab3-5e34-4e2f-bfe9-ee75e359909c\") " pod="kube-system/coredns-674b8bbfcf-xndbm" Nov 5 15:53:39.217202 containerd[2489]: time="2025-11-05T15:53:39.217084593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xndbm,Uid:73bb0ab3-5e34-4e2f-bfe9-ee75e359909c,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:40.160689 kubelet[3926]: I1105 15:53:40.160647 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44ab8120-7220-47ae-93cc-8b7e8505e744-tigera-ca-bundle\") pod \"calico-kube-controllers-7b6458fdcf-zcgg8\" (UID: \"44ab8120-7220-47ae-93cc-8b7e8505e744\") " pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" Nov 5 15:53:40.160689 kubelet[3926]: I1105 15:53:40.160682 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwpt\" (UniqueName: \"kubernetes.io/projected/44ab8120-7220-47ae-93cc-8b7e8505e744-kube-api-access-6hwpt\") pod \"calico-kube-controllers-7b6458fdcf-zcgg8\" (UID: 
\"44ab8120-7220-47ae-93cc-8b7e8505e744\") " pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" Nov 5 15:53:40.264475 kubelet[3926]: I1105 15:53:40.261607 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c790f420-0686-46b7-ac9a-3d5362dc937f-calico-apiserver-certs\") pod \"calico-apiserver-6445c55d69-dp5nk\" (UID: \"c790f420-0686-46b7-ac9a-3d5362dc937f\") " pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" Nov 5 15:53:40.264475 kubelet[3926]: I1105 15:53:40.261644 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgxr\" (UniqueName: \"kubernetes.io/projected/c790f420-0686-46b7-ac9a-3d5362dc937f-kube-api-access-gfgxr\") pod \"calico-apiserver-6445c55d69-dp5nk\" (UID: \"c790f420-0686-46b7-ac9a-3d5362dc937f\") " pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" Nov 5 15:53:40.287034 systemd[1]: Created slice kubepods-besteffort-podc790f420_0686_46b7_ac9a_3d5362dc937f.slice - libcontainer container kubepods-besteffort-podc790f420_0686_46b7_ac9a_3d5362dc937f.slice. Nov 5 15:53:40.317811 systemd[1]: Created slice kubepods-besteffort-pod44ab8120_7220_47ae_93cc_8b7e8505e744.slice - libcontainer container kubepods-besteffort-pod44ab8120_7220_47ae_93cc_8b7e8505e744.slice. Nov 5 15:53:40.325905 containerd[2489]: time="2025-11-05T15:53:40.325850345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6458fdcf-zcgg8,Uid:44ab8120-7220-47ae-93cc-8b7e8505e744,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:40.332819 systemd[1]: Created slice kubepods-besteffort-pod28e6161d_8a43_4410_bad8_024e6e11b082.slice - libcontainer container kubepods-besteffort-pod28e6161d_8a43_4410_bad8_024e6e11b082.slice. Nov 5 15:53:40.340671 systemd[1]: Created slice kubepods-besteffort-pod6345634e_739d_4c50_8a09_88a959b92cba.slice - libcontainer container kubepods-besteffort-pod6345634e_739d_4c50_8a09_88a959b92cba.slice. Nov 5 15:53:40.359903 systemd[1]: Created slice kubepods-besteffort-pod65694c8b_c2eb_4f3f_8724_f2d844e7483e.slice - libcontainer container kubepods-besteffort-pod65694c8b_c2eb_4f3f_8724_f2d844e7483e.slice. 
Nov 5 15:53:40.362328 kubelet[3926]: I1105 15:53:40.362300 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/65694c8b-c2eb-4f3f-8724-f2d844e7483e-calico-apiserver-certs\") pod \"calico-apiserver-6445c55d69-kdh6x\" (UID: \"65694c8b-c2eb-4f3f-8724-f2d844e7483e\") " pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" Nov 5 15:53:40.362419 kubelet[3926]: I1105 15:53:40.362369 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6345634e-739d-4c50-8a09-88a959b92cba-config\") pod \"goldmane-666569f655-xb2bl\" (UID: \"6345634e-739d-4c50-8a09-88a959b92cba\") " pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.362419 kubelet[3926]: I1105 15:53:40.362389 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6345634e-739d-4c50-8a09-88a959b92cba-goldmane-key-pair\") pod \"goldmane-666569f655-xb2bl\" (UID: \"6345634e-739d-4c50-8a09-88a959b92cba\") " pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.362419 kubelet[3926]: I1105 15:53:40.362408 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfml\" (UniqueName: \"kubernetes.io/projected/28e6161d-8a43-4410-bad8-024e6e11b082-kube-api-access-wlfml\") pod \"whisker-954b7cf6b-cdwl5\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " pod="calico-system/whisker-954b7cf6b-cdwl5" Nov 5 15:53:40.362497 kubelet[3926]: I1105 15:53:40.362431 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6345634e-739d-4c50-8a09-88a959b92cba-goldmane-ca-bundle\") pod \"goldmane-666569f655-xb2bl\" (UID: \"6345634e-739d-4c50-8a09-88a959b92cba\") " pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.362497 kubelet[3926]: I1105 15:53:40.362454 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55nk\" (UniqueName: \"kubernetes.io/projected/6345634e-739d-4c50-8a09-88a959b92cba-kube-api-access-b55nk\") pod \"goldmane-666569f655-xb2bl\" (UID: \"6345634e-739d-4c50-8a09-88a959b92cba\") " pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.362497 kubelet[3926]: I1105 15:53:40.362492 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-backend-key-pair\") pod \"whisker-954b7cf6b-cdwl5\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " pod="calico-system/whisker-954b7cf6b-cdwl5" Nov 5 15:53:40.362572 kubelet[3926]: I1105 15:53:40.362535 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbpn\" (UniqueName: \"kubernetes.io/projected/65694c8b-c2eb-4f3f-8724-f2d844e7483e-kube-api-access-vcbpn\") pod \"calico-apiserver-6445c55d69-kdh6x\" (UID: \"65694c8b-c2eb-4f3f-8724-f2d844e7483e\") " pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" Nov 5 15:53:40.362572 kubelet[3926]: I1105 15:53:40.362554 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-ca-bundle\") pod \"whisker-954b7cf6b-cdwl5\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " pod="calico-system/whisker-954b7cf6b-cdwl5" Nov 5 15:53:40.434049 containerd[2489]: time="2025-11-05T15:53:40.433651508Z" level=error msg="Failed to destroy network for sandbox \"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.441522 containerd[2489]: time="2025-11-05T15:53:40.441477658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6xhgc,Uid:53ece130-7146-4442-9e4a-b716be345aed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.441868 kubelet[3926]: E1105 15:53:40.441832 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.442121 kubelet[3926]: E1105 15:53:40.442098 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6xhgc" Nov 5 15:53:40.442169 kubelet[3926]: E1105 15:53:40.442145 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6xhgc" Nov 5 15:53:40.442232 kubelet[3926]: E1105 15:53:40.442200 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6xhgc_kube-system(53ece130-7146-4442-9e4a-b716be345aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6xhgc_kube-system(53ece130-7146-4442-9e4a-b716be345aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"361f52b594099cb9dea11d26142c6c2caeaf4b7fa38787a5040a02d2b971fe5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6xhgc" podUID="53ece130-7146-4442-9e4a-b716be345aed" Nov 5 15:53:40.451920 containerd[2489]: time="2025-11-05T15:53:40.451797393Z" level=error msg="Failed to destroy network for sandbox 
\"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.455476 containerd[2489]: time="2025-11-05T15:53:40.455436430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nllfx,Uid:0f742570-0c09-4ef6-8800-4cac3ba577e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.455664 kubelet[3926]: E1105 15:53:40.455636 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.456732 kubelet[3926]: E1105 15:53:40.455686 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nllfx" Nov 5 15:53:40.456829 kubelet[3926]: E1105 15:53:40.456746 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nllfx" Nov 5 15:53:40.456861 kubelet[3926]: E1105 15:53:40.456825 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e7b2e9507e65c2548495dee27eb56c4c07645fac9fb2cb0b27fa9f234041af8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:40.484823 containerd[2489]: time="2025-11-05T15:53:40.483668722Z" level=error msg="Failed to destroy network for sandbox \"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.487461 containerd[2489]: time="2025-11-05T15:53:40.487082075Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-xndbm,Uid:73bb0ab3-5e34-4e2f-bfe9-ee75e359909c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.487753 kubelet[3926]: E1105 15:53:40.487724 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.487911 kubelet[3926]: E1105 15:53:40.487831 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xndbm" Nov 5 15:53:40.487911 kubelet[3926]: E1105 15:53:40.487857 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xndbm" Nov 5 15:53:40.488060 kubelet[3926]: E1105 15:53:40.488029 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xndbm_kube-system(73bb0ab3-5e34-4e2f-bfe9-ee75e359909c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xndbm_kube-system(73bb0ab3-5e34-4e2f-bfe9-ee75e359909c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8380fb812a12ca70d6324bf54117eda372e9540c47cea5760e53cbe3d9123ba0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xndbm" podUID="73bb0ab3-5e34-4e2f-bfe9-ee75e359909c" Nov 5 15:53:40.489474 containerd[2489]: time="2025-11-05T15:53:40.489440659Z" level=error msg="Failed to destroy network for sandbox \"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.492726 containerd[2489]: time="2025-11-05T15:53:40.492691446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6458fdcf-zcgg8,Uid:44ab8120-7220-47ae-93cc-8b7e8505e744,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.492872 kubelet[3926]: E1105 15:53:40.492847 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.492920 kubelet[3926]: E1105 15:53:40.492904 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" Nov 5 15:53:40.492963 kubelet[3926]: E1105 15:53:40.492928 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" Nov 5 15:53:40.493046 kubelet[3926]: E1105 15:53:40.492986 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82392f468db2cf64ba83deca4f1333f64a6052cf4abc882b785bb420e776eda9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:53:40.602907 containerd[2489]: time="2025-11-05T15:53:40.602868039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-dp5nk,Uid:c790f420-0686-46b7-ac9a-3d5362dc937f,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:40.636464 containerd[2489]: time="2025-11-05T15:53:40.636403341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-954b7cf6b-cdwl5,Uid:28e6161d-8a43-4410-bad8-024e6e11b082,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:40.654783 containerd[2489]: time="2025-11-05T15:53:40.654577640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xb2bl,Uid:6345634e-739d-4c50-8a09-88a959b92cba,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:40.655145 containerd[2489]: time="2025-11-05T15:53:40.655017672Z" level=error msg="Failed to destroy network for sandbox \"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 
15:53:40.662819 containerd[2489]: time="2025-11-05T15:53:40.662781288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-kdh6x,Uid:65694c8b-c2eb-4f3f-8724-f2d844e7483e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:40.664230 containerd[2489]: time="2025-11-05T15:53:40.664140175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-dp5nk,Uid:c790f420-0686-46b7-ac9a-3d5362dc937f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.664623 kubelet[3926]: E1105 15:53:40.664571 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.664782 kubelet[3926]: E1105 15:53:40.664631 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" Nov 5 15:53:40.664782 kubelet[3926]: E1105 15:53:40.664662 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" Nov 5 15:53:40.664782 kubelet[3926]: E1105 15:53:40.664711 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c98ba1e64d916a23424e3a79120676b80d3007e57bdc64594fdd77ba3fabb54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:53:40.718966 containerd[2489]: time="2025-11-05T15:53:40.718792538Z" level=error msg="Failed to destroy network for sandbox \"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.722970 containerd[2489]: 
time="2025-11-05T15:53:40.722930528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-954b7cf6b-cdwl5,Uid:28e6161d-8a43-4410-bad8-024e6e11b082,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.723436 kubelet[3926]: E1105 15:53:40.723400 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.723741 kubelet[3926]: E1105 15:53:40.723600 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-954b7cf6b-cdwl5" Nov 5 15:53:40.723741 kubelet[3926]: E1105 15:53:40.723631 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-954b7cf6b-cdwl5" Nov 5 15:53:40.723867 kubelet[3926]: E1105 15:53:40.723845 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-954b7cf6b-cdwl5_calico-system(28e6161d-8a43-4410-bad8-024e6e11b082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-954b7cf6b-cdwl5_calico-system(28e6161d-8a43-4410-bad8-024e6e11b082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8367d40a5ccb026f7412c3350c56621a50c7b0b9ef7d6d07552db1cc6fa4c39f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-954b7cf6b-cdwl5" podUID="28e6161d-8a43-4410-bad8-024e6e11b082" Nov 5 15:53:40.736632 containerd[2489]: time="2025-11-05T15:53:40.736593165Z" level=error msg="Failed to destroy network for sandbox \"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.743764 containerd[2489]: time="2025-11-05T15:53:40.743731208Z" level=error msg="Failed to destroy network for sandbox \"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 
15:53:40.749650 containerd[2489]: time="2025-11-05T15:53:40.749560463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-kdh6x,Uid:65694c8b-c2eb-4f3f-8724-f2d844e7483e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.750095 kubelet[3926]: E1105 15:53:40.749743 3926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.750095 kubelet[3926]: E1105 15:53:40.749783 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" Nov 5 15:53:40.750095 kubelet[3926]: E1105 15:53:40.749806 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" Nov 5 15:53:40.750205 kubelet[3926]: E1105 15:53:40.749854 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce1d18a1898e8106d38795f64718bbf5615ebb3990e2858d14bf30357a1b65e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:53:40.752197 containerd[2489]: time="2025-11-05T15:53:40.752167000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xb2bl,Uid:6345634e-739d-4c50-8a09-88a959b92cba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.752398 kubelet[3926]: E1105 15:53:40.752326 3926 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:53:40.752398 kubelet[3926]: E1105 15:53:40.752370 3926 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.752492 kubelet[3926]: E1105 15:53:40.752391 3926 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xb2bl" Nov 5 15:53:40.752492 kubelet[3926]: E1105 15:53:40.752450 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6369ccd443e95f4431779747a4e7f49a9ca289ac013a23b4de8b66d1aca55e55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:53:41.080301 systemd[1]: run-netns-cni\x2d155c2c84\x2d25d9\x2d7640\x2df121\x2d1a2baab3785c.mount: Deactivated successfully. Nov 5 15:53:41.080572 systemd[1]: run-netns-cni\x2d6f2bafe3\x2d8072\x2d291b\x2d12e0\x2dc74b8de0eae2.mount: Deactivated successfully. Nov 5 15:53:41.080680 systemd[1]: run-netns-cni\x2d2233faa3\x2d4b1a\x2d41b4\x2dcbc9\x2d20ca2acc4a2c.mount: Deactivated successfully. Nov 5 15:53:41.080791 systemd[1]: run-netns-cni\x2d82beb904\x2d2559\x2d46af\x2d4bb7\x2d259459fcbbcd.mount: Deactivated successfully. Nov 5 15:53:41.147684 containerd[2489]: time="2025-11-05T15:53:41.147604130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:53:48.147152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011138808.mount: Deactivated successfully. 
Nov 5 15:53:48.173350 containerd[2489]: time="2025-11-05T15:53:48.173309262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:48.175543 containerd[2489]: time="2025-11-05T15:53:48.175517557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:53:48.178081 containerd[2489]: time="2025-11-05T15:53:48.178037772Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:48.181658 containerd[2489]: time="2025-11-05T15:53:48.181611382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:48.182067 containerd[2489]: time="2025-11-05T15:53:48.181964817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.03431954s" Nov 5 15:53:48.182067 containerd[2489]: time="2025-11-05T15:53:48.181995108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:53:48.198771 containerd[2489]: time="2025-11-05T15:53:48.198742395Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:53:48.218431 containerd[2489]: time="2025-11-05T15:53:48.218403948Z" level=info msg="Container de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:48.233444 containerd[2489]: time="2025-11-05T15:53:48.233412835Z" level=info msg="CreateContainer within sandbox \"d7d0a417fe0d73451ab7e72abacae9c7d62ed7909fc529fb0c15082194083631\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\"" Nov 5 15:53:48.234316 containerd[2489]: time="2025-11-05T15:53:48.233773605Z" level=info msg="StartContainer for \"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\"" Nov 5 15:53:48.235709 containerd[2489]: time="2025-11-05T15:53:48.235682955Z" level=info msg="connecting to shim de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d" address="unix:///run/containerd/s/ca2e0af5b3f547a278fcdc7319b1842701bb00e13ca5a44cd03563a8bc3edc6d" protocol=ttrpc version=3 Nov 5 15:53:48.252417 systemd[1]: Started cri-containerd-de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d.scope - libcontainer container de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d. Nov 5 15:53:48.285658 containerd[2489]: time="2025-11-05T15:53:48.285634867Z" level=info msg="StartContainer for \"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" returns successfully" Nov 5 15:53:48.659575 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:53:48.659690 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 5 15:53:48.814593 kubelet[3926]: I1105 15:53:48.814551 3926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-backend-key-pair\") pod \"28e6161d-8a43-4410-bad8-024e6e11b082\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " Nov 5 15:53:48.815318 kubelet[3926]: I1105 15:53:48.815301 3926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-ca-bundle\") pod \"28e6161d-8a43-4410-bad8-024e6e11b082\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " Nov 5 15:53:48.815507 kubelet[3926]: I1105 15:53:48.815405 3926 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlfml\" (UniqueName: \"kubernetes.io/projected/28e6161d-8a43-4410-bad8-024e6e11b082-kube-api-access-wlfml\") pod \"28e6161d-8a43-4410-bad8-024e6e11b082\" (UID: \"28e6161d-8a43-4410-bad8-024e6e11b082\") " Nov 5 15:53:48.817774 kubelet[3926]: I1105 15:53:48.817458 3926 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "28e6161d-8a43-4410-bad8-024e6e11b082" (UID: "28e6161d-8a43-4410-bad8-024e6e11b082"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:53:48.823437 kubelet[3926]: I1105 15:53:48.823401 3926 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "28e6161d-8a43-4410-bad8-024e6e11b082" (UID: "28e6161d-8a43-4410-bad8-024e6e11b082"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:53:48.823950 kubelet[3926]: I1105 15:53:48.823927 3926 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e6161d-8a43-4410-bad8-024e6e11b082-kube-api-access-wlfml" (OuterVolumeSpecName: "kube-api-access-wlfml") pod "28e6161d-8a43-4410-bad8-024e6e11b082" (UID: "28e6161d-8a43-4410-bad8-024e6e11b082"). InnerVolumeSpecName "kube-api-access-wlfml". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:53:48.916571 kubelet[3926]: I1105 15:53:48.916485 3926 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-backend-key-pair\") on node \"ci-4487.0.1-a-e6d953e7e7\" DevicePath \"\"" Nov 5 15:53:48.916571 kubelet[3926]: I1105 15:53:48.916508 3926 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e6161d-8a43-4410-bad8-024e6e11b082-whisker-ca-bundle\") on node \"ci-4487.0.1-a-e6d953e7e7\" DevicePath \"\"" Nov 5 15:53:48.916571 kubelet[3926]: I1105 15:53:48.916517 3926 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlfml\" (UniqueName: \"kubernetes.io/projected/28e6161d-8a43-4410-bad8-024e6e11b082-kube-api-access-wlfml\") on node \"ci-4487.0.1-a-e6d953e7e7\" DevicePath \"\"" Nov 5 15:53:49.147167 systemd[1]: var-lib-kubelet-pods-28e6161d\x2d8a43\x2d4410\x2dbad8\x2d024e6e11b082-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlfml.mount: Deactivated successfully. Nov 5 15:53:49.147267 systemd[1]: var-lib-kubelet-pods-28e6161d\x2d8a43\x2d4410\x2dbad8\x2d024e6e11b082-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:53:49.169146 systemd[1]: Removed slice kubepods-besteffort-pod28e6161d_8a43_4410_bad8_024e6e11b082.slice - libcontainer container kubepods-besteffort-pod28e6161d_8a43_4410_bad8_024e6e11b082.slice. Nov 5 15:53:49.198777 kubelet[3926]: I1105 15:53:49.198605 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jkmk8" podStartSLOduration=2.251614728 podStartE2EDuration="33.198586167s" podCreationTimestamp="2025-11-05 15:53:16 +0000 UTC" firstStartedPulling="2025-11-05 15:53:17.235608094 +0000 UTC m=+19.316864734" lastFinishedPulling="2025-11-05 15:53:48.182579535 +0000 UTC m=+50.263836173" observedRunningTime="2025-11-05 15:53:49.182777754 +0000 UTC m=+51.264034408" watchObservedRunningTime="2025-11-05 15:53:49.198586167 +0000 UTC m=+51.279842870" Nov 5 15:53:49.257295 systemd[1]: Created slice kubepods-besteffort-pod80c81441_a30e_43a5_948f_5a1c2800b71c.slice - libcontainer container kubepods-besteffort-pod80c81441_a30e_43a5_948f_5a1c2800b71c.slice. 
Nov 5 15:53:49.319238 kubelet[3926]: I1105 15:53:49.319167 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80c81441-a30e-43a5-948f-5a1c2800b71c-whisker-backend-key-pair\") pod \"whisker-845d8f4b9b-8qsjt\" (UID: \"80c81441-a30e-43a5-948f-5a1c2800b71c\") " pod="calico-system/whisker-845d8f4b9b-8qsjt" Nov 5 15:53:49.319238 kubelet[3926]: I1105 15:53:49.319242 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4wr\" (UniqueName: \"kubernetes.io/projected/80c81441-a30e-43a5-948f-5a1c2800b71c-kube-api-access-jz4wr\") pod \"whisker-845d8f4b9b-8qsjt\" (UID: \"80c81441-a30e-43a5-948f-5a1c2800b71c\") " pod="calico-system/whisker-845d8f4b9b-8qsjt" Nov 5 15:53:49.319454 kubelet[3926]: I1105 15:53:49.319289 3926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c81441-a30e-43a5-948f-5a1c2800b71c-whisker-ca-bundle\") pod \"whisker-845d8f4b9b-8qsjt\" (UID: \"80c81441-a30e-43a5-948f-5a1c2800b71c\") " pod="calico-system/whisker-845d8f4b9b-8qsjt" Nov 5 15:53:49.562183 containerd[2489]: time="2025-11-05T15:53:49.562074220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-845d8f4b9b-8qsjt,Uid:80c81441-a30e-43a5-948f-5a1c2800b71c,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:49.668624 systemd-networkd[2265]: cali03c0b5945aa: Link UP Nov 5 15:53:49.670320 systemd-networkd[2265]: cali03c0b5945aa: Gained carrier Nov 5 15:53:49.683314 containerd[2489]: 2025-11-05 15:53:49.586 [INFO][5022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:53:49.683314 containerd[2489]: 2025-11-05 15:53:49.595 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0 whisker-845d8f4b9b- calico-system 80c81441-a30e-43a5-948f-5a1c2800b71c 914 0 2025-11-05 15:53:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:845d8f4b9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 whisker-845d8f4b9b-8qsjt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali03c0b5945aa [] [] }} ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-" Nov 5 15:53:49.683314 containerd[2489]: 2025-11-05 15:53:49.595 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683314 containerd[2489]: 2025-11-05 15:53:49.616 [INFO][5033] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" HandleID="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.616 [INFO][5033] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" HandleID="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"whisker-845d8f4b9b-8qsjt", "timestamp":"2025-11-05 15:53:49.616557617 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.617 [INFO][5033] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.617 [INFO][5033] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.617 [INFO][5033] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.622 [INFO][5033] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.625 [INFO][5033] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.628 [INFO][5033] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.629 [INFO][5033] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683537 containerd[2489]: 2025-11-05 15:53:49.631 [INFO][5033] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.631 [INFO][5033] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.632 [INFO][5033] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90 Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.639 [INFO][5033] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.644 [INFO][5033] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.1/26] block=192.168.100.0/26 handle="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.644 [INFO][5033] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.1/26] handle="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.644 [INFO][5033] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:49.683734 containerd[2489]: 2025-11-05 15:53:49.644 [INFO][5033] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.1/26] IPv6=[] ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" HandleID="k8s-pod-network.ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683846 containerd[2489]: 2025-11-05 15:53:49.647 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0", GenerateName:"whisker-845d8f4b9b-", Namespace:"calico-system", SelfLink:"", UID:"80c81441-a30e-43a5-948f-5a1c2800b71c", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"845d8f4b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"whisker-845d8f4b9b-8qsjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03c0b5945aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:49.683846 containerd[2489]: 2025-11-05 15:53:49.647 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.1/32] ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683909 containerd[2489]: 2025-11-05 15:53:49.647 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03c0b5945aa ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683909 containerd[2489]: 2025-11-05 15:53:49.669 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.683941 containerd[2489]: 2025-11-05 15:53:49.669 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0", GenerateName:"whisker-845d8f4b9b-", Namespace:"calico-system", SelfLink:"", UID:"80c81441-a30e-43a5-948f-5a1c2800b71c", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"845d8f4b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90", Pod:"whisker-845d8f4b9b-8qsjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali03c0b5945aa", MAC:"a6:8c:93:c9:7d:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:49.683994 containerd[2489]: 2025-11-05 15:53:49.680 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" Namespace="calico-system" Pod="whisker-845d8f4b9b-8qsjt" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-whisker--845d8f4b9b--8qsjt-eth0" Nov 5 15:53:49.722124 containerd[2489]: time="2025-11-05T15:53:49.721658049Z" level=info msg="connecting to shim ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90" address="unix:///run/containerd/s/68a183f51b148e18b4c0dd60119076c8cb914ed88cbaa9ad14c442323da611c3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:49.740428 systemd[1]: Started cri-containerd-ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90.scope - libcontainer container ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90. 
Nov 5 15:53:49.781156 containerd[2489]: time="2025-11-05T15:53:49.781113994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-845d8f4b9b-8qsjt,Uid:80c81441-a30e-43a5-948f-5a1c2800b71c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba057437936d3ebea17206ae15b20244d3a436e417ba55dab4aab168968a1e90\"" Nov 5 15:53:49.782614 containerd[2489]: time="2025-11-05T15:53:49.782578914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:53:50.036134 kubelet[3926]: I1105 15:53:50.036087 3926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e6161d-8a43-4410-bad8-024e6e11b082" path="/var/lib/kubelet/pods/28e6161d-8a43-4410-bad8-024e6e11b082/volumes" Nov 5 15:53:50.339595 containerd[2489]: time="2025-11-05T15:53:50.339425579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:50.345984 containerd[2489]: time="2025-11-05T15:53:50.345863711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:53:50.345984 containerd[2489]: time="2025-11-05T15:53:50.345884883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:53:50.346144 kubelet[3926]: E1105 15:53:50.346104 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:50.346188 kubelet[3926]: E1105 15:53:50.346166 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:50.346401 kubelet[3926]: E1105 15:53:50.346340 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80c9a76a2ed646769b0abd352af315e9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:50.348900 containerd[2489]: time="2025-11-05T15:53:50.348820937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:53:50.600730 systemd-networkd[2265]: vxlan.calico: Link UP Nov 5 15:53:50.600738 systemd-networkd[2265]: vxlan.calico: Gained carrier Nov 5 15:53:50.623380 containerd[2489]: time="2025-11-05T15:53:50.622510866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:50.626007 containerd[2489]: time="2025-11-05T15:53:50.625857326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:50.626007 containerd[2489]: time="2025-11-05T15:53:50.625861588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:53:50.627594 kubelet[3926]: E1105 15:53:50.626238 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:50.627594 kubelet[3926]: E1105 15:53:50.626648 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:50.627745 kubelet[3926]: E1105 15:53:50.627382 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:50.629040 kubelet[3926]: E1105 15:53:50.628993 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:53:50.936440 systemd-networkd[2265]: cali03c0b5945aa: Gained IPv6LL Nov 5 15:53:51.170387 kubelet[3926]: E1105 
15:53:51.170338 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:53:52.033010 containerd[2489]: time="2025-11-05T15:53:52.032679413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6458fdcf-zcgg8,Uid:44ab8120-7220-47ae-93cc-8b7e8505e744,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:52.176896 systemd-networkd[2265]: cali87ae71648fb: Link UP Nov 5 15:53:52.178015 systemd-networkd[2265]: cali87ae71648fb: Gained carrier Nov 5 15:53:52.193523 containerd[2489]: 2025-11-05 15:53:52.121 [INFO][5288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0 calico-kube-controllers-7b6458fdcf- calico-system 44ab8120-7220-47ae-93cc-8b7e8505e744 846 0 2025-11-05 15:53:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b6458fdcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 calico-kube-controllers-7b6458fdcf-zcgg8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali87ae71648fb [] [] }} ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-" Nov 5 15:53:52.193523 containerd[2489]: 2025-11-05 15:53:52.121 [INFO][5288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.193523 containerd[2489]: 2025-11-05 15:53:52.145 [INFO][5300] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" HandleID="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.145 [INFO][5300] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" 
HandleID="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"calico-kube-controllers-7b6458fdcf-zcgg8", "timestamp":"2025-11-05 15:53:52.145129547 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.145 [INFO][5300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.145 [INFO][5300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.145 [INFO][5300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.150 [INFO][5300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.153 [INFO][5300] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.156 [INFO][5300] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.157 [INFO][5300] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.193889 containerd[2489]: 2025-11-05 15:53:52.159 [INFO][5300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.159 [INFO][5300] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.160 [INFO][5300] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0 Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.166 [INFO][5300] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.172 [INFO][5300] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.2/26] block=192.168.100.0/26 handle="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.172 [INFO][5300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.2/26] handle="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.172 [INFO][5300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:52.194121 containerd[2489]: 2025-11-05 15:53:52.172 [INFO][5300] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.2/26] IPv6=[] ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" HandleID="k8s-pod-network.77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.194865 containerd[2489]: 2025-11-05 15:53:52.173 [INFO][5288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0", GenerateName:"calico-kube-controllers-7b6458fdcf-", Namespace:"calico-system", SelfLink:"", UID:"44ab8120-7220-47ae-93cc-8b7e8505e744", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6458fdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"calico-kube-controllers-7b6458fdcf-zcgg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali87ae71648fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:52.194971 containerd[2489]: 2025-11-05 15:53:52.174 [INFO][5288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.2/32] ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.194971 containerd[2489]: 2025-11-05 15:53:52.174 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87ae71648fb ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.194971 containerd[2489]: 2025-11-05 15:53:52.177 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" 
WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.195049 containerd[2489]: 2025-11-05 15:53:52.178 [INFO][5288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0", GenerateName:"calico-kube-controllers-7b6458fdcf-", Namespace:"calico-system", SelfLink:"", UID:"44ab8120-7220-47ae-93cc-8b7e8505e744", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6458fdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0", Pod:"calico-kube-controllers-7b6458fdcf-zcgg8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali87ae71648fb", MAC:"22:69:47:76:00:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:52.195108 containerd[2489]: 2025-11-05 15:53:52.189 [INFO][5288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" Namespace="calico-system" Pod="calico-kube-controllers-7b6458fdcf-zcgg8" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--kube--controllers--7b6458fdcf--zcgg8-eth0" Nov 5 15:53:52.244506 containerd[2489]: time="2025-11-05T15:53:52.244447686Z" level=info msg="connecting to shim 77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0" address="unix:///run/containerd/s/c357c9cebdcceea54bb36209e1bb93a01c743c18a6dee74564f53fe4094a1ea1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:52.269431 systemd[1]: Started cri-containerd-77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0.scope - libcontainer container 77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0. 
Nov 5 15:53:52.310583 containerd[2489]: time="2025-11-05T15:53:52.310377626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6458fdcf-zcgg8,Uid:44ab8120-7220-47ae-93cc-8b7e8505e744,Namespace:calico-system,Attempt:0,} returns sandbox id \"77dc4a4dc723fa3f333efdf466a5d574964c5c7212ecd5f0168f05958f5b31b0\"" Nov 5 15:53:52.313346 containerd[2489]: time="2025-11-05T15:53:52.313267002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:53:52.559295 containerd[2489]: time="2025-11-05T15:53:52.559226813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:52.562831 containerd[2489]: time="2025-11-05T15:53:52.562539697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:53:52.562831 containerd[2489]: time="2025-11-05T15:53:52.562538956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:52.562931 kubelet[3926]: E1105 15:53:52.562779 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:52.562931 kubelet[3926]: E1105 15:53:52.562827 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:52.564338 kubelet[3926]: E1105 15:53:52.563183 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hwpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:52.564704 kubelet[3926]: E1105 15:53:52.564656 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:53:52.600421 systemd-networkd[2265]: vxlan.calico: Gained IPv6LL Nov 5 15:53:53.033263 
containerd[2489]: time="2025-11-05T15:53:53.033208239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xb2bl,Uid:6345634e-739d-4c50-8a09-88a959b92cba,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:53.033662 containerd[2489]: time="2025-11-05T15:53:53.033208234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nllfx,Uid:0f742570-0c09-4ef6-8800-4cac3ba577e3,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:53.160247 systemd-networkd[2265]: calib77e3df73bf: Link UP Nov 5 15:53:53.161694 systemd-networkd[2265]: calib77e3df73bf: Gained carrier Nov 5 15:53:53.176709 kubelet[3926]: E1105 15:53:53.176627 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:53:53.179557 containerd[2489]: 2025-11-05 15:53:53.089 [INFO][5362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0 csi-node-driver- calico-system 0f742570-0c09-4ef6-8800-4cac3ba577e3 705 0 2025-11-05 15:53:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 csi-node-driver-nllfx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib77e3df73bf [] [] }} ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-" Nov 5 15:53:53.179557 containerd[2489]: 2025-11-05 15:53:53.091 [INFO][5362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.179557 containerd[2489]: 2025-11-05 15:53:53.125 [INFO][5388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" HandleID="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.126 [INFO][5388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" HandleID="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4487.0.1-a-e6d953e7e7", "pod":"csi-node-driver-nllfx", "timestamp":"2025-11-05 15:53:53.125862222 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.126 [INFO][5388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.126 [INFO][5388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.126 [INFO][5388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.133 [INFO][5388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.136 [INFO][5388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.139 [INFO][5388] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.140 [INFO][5388] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.179749 containerd[2489]: 2025-11-05 15:53:53.142 [INFO][5388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.142 [INFO][5388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.143 [INFO][5388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4 Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.147 [INFO][5388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.3/26] block=192.168.100.0/26 handle="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.3/26] handle="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:53.180000 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.3/26] IPv6=[] ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" HandleID="k8s-pod-network.2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.180154 containerd[2489]: 2025-11-05 15:53:53.157 [INFO][5362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f742570-0c09-4ef6-8800-4cac3ba577e3", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"csi-node-driver-nllfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib77e3df73bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:53.180225 containerd[2489]: 2025-11-05 15:53:53.157 [INFO][5362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.3/32] ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.180225 containerd[2489]: 2025-11-05 15:53:53.157 [INFO][5362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib77e3df73bf ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.180225 containerd[2489]: 2025-11-05 15:53:53.160 [INFO][5362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.181518 containerd[2489]: 2025-11-05 15:53:53.160 [INFO][5362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f742570-0c09-4ef6-8800-4cac3ba577e3", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4", Pod:"csi-node-driver-nllfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib77e3df73bf", MAC:"ca:08:65:c4:c1:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:53.181630 containerd[2489]: 2025-11-05 15:53:53.173 [INFO][5362] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" Namespace="calico-system" Pod="csi-node-driver-nllfx" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-csi--node--driver--nllfx-eth0" Nov 5 15:53:53.222706 containerd[2489]: time="2025-11-05T15:53:53.222668988Z" level=info msg="connecting to shim 2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4" address="unix:///run/containerd/s/bc1ec9bc6541e3e54555ebf0e77b69d7195e46bfdaf0a6c934907450876d4624" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:53.248573 systemd[1]: Started cri-containerd-2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4.scope - libcontainer container 2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4. 
Nov 5 15:53:53.277385 systemd-networkd[2265]: calie71c4a2d0bb: Link UP Nov 5 15:53:53.281045 systemd-networkd[2265]: calie71c4a2d0bb: Gained carrier Nov 5 15:53:53.309344 containerd[2489]: 2025-11-05 15:53:53.098 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0 goldmane-666569f655- calico-system 6345634e-739d-4c50-8a09-88a959b92cba 847 0 2025-11-05 15:53:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 goldmane-666569f655-xb2bl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie71c4a2d0bb [] [] }} ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-" Nov 5 15:53:53.309344 containerd[2489]: 2025-11-05 15:53:53.098 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.309344 containerd[2489]: 2025-11-05 15:53:53.129 [INFO][5393] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" HandleID="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.129 [INFO][5393] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" HandleID="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"goldmane-666569f655-xb2bl", "timestamp":"2025-11-05 15:53:53.129030921 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.130 [INFO][5393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.155 [INFO][5393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.234 [INFO][5393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.238 [INFO][5393] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.245 [INFO][5393] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.246 [INFO][5393] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309516 containerd[2489]: 2025-11-05 15:53:53.251 [INFO][5393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.251 [INFO][5393] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.254 [INFO][5393] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053 Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.258 [INFO][5393] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.269 [INFO][5393] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.4/26] block=192.168.100.0/26 handle="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.269 [INFO][5393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.4/26] handle="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.269 [INFO][5393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:53.309742 containerd[2489]: 2025-11-05 15:53:53.269 [INFO][5393] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.4/26] IPv6=[] ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" HandleID="k8s-pod-network.3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.309899 containerd[2489]: 2025-11-05 15:53:53.272 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6345634e-739d-4c50-8a09-88a959b92cba", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"goldmane-666569f655-xb2bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie71c4a2d0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:53.309965 containerd[2489]: 2025-11-05 15:53:53.273 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.4/32] ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.309965 containerd[2489]: 2025-11-05 15:53:53.273 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie71c4a2d0bb ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.309965 containerd[2489]: 2025-11-05 15:53:53.280 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.310035 containerd[2489]: 2025-11-05 15:53:53.281 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" 
Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6345634e-739d-4c50-8a09-88a959b92cba", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053", Pod:"goldmane-666569f655-xb2bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie71c4a2d0bb", MAC:"1e:3d:d9:27:15:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:53.310096 containerd[2489]: 2025-11-05 15:53:53.307 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" Namespace="calico-system" Pod="goldmane-666569f655-xb2bl" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-goldmane--666569f655--xb2bl-eth0" Nov 5 15:53:53.365754 containerd[2489]: time="2025-11-05T15:53:53.365713300Z" level=info msg="connecting to shim 3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053" address="unix:///run/containerd/s/7c48d2b5bf23165f1d9de4f6032c26021ea971393e8f199cf7fc206d20b33467" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:53.401419 systemd[1]: Started cri-containerd-3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053.scope - libcontainer container 3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053. 
Nov 5 15:53:53.426501 containerd[2489]: time="2025-11-05T15:53:53.426470135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nllfx,Uid:0f742570-0c09-4ef6-8800-4cac3ba577e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2893167d12d8dabb3d9534f2b85132064d909f6b1f2378e7769383dc07bdf2b4\"" Nov 5 15:53:53.430012 containerd[2489]: time="2025-11-05T15:53:53.429634143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:53:53.465429 containerd[2489]: time="2025-11-05T15:53:53.465404234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xb2bl,Uid:6345634e-739d-4c50-8a09-88a959b92cba,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a0ebb146d239e3feedadf65757b900f09211436005ee2bad34afe643a2fe053\"" Nov 5 15:53:53.691507 containerd[2489]: time="2025-11-05T15:53:53.691461543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:53.694860 containerd[2489]: time="2025-11-05T15:53:53.694834613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:53:53.695296 containerd[2489]: time="2025-11-05T15:53:53.694911326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:53:53.695357 kubelet[3926]: E1105 15:53:53.695053 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:53.695357 kubelet[3926]: E1105 15:53:53.695091 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:53.696148 containerd[2489]: time="2025-11-05T15:53:53.695980020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:53:53.696828 kubelet[3926]: E1105 15:53:53.695782 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:53.934900 containerd[2489]: time="2025-11-05T15:53:53.934846526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:53.938914 containerd[2489]: time="2025-11-05T15:53:53.938877277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:53:53.938969 containerd[2489]: time="2025-11-05T15:53:53.938959687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:53.939129 kubelet[3926]: E1105 15:53:53.939092 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:53.939198 kubelet[3926]: E1105 15:53:53.939154 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:53.939544 containerd[2489]: time="2025-11-05T15:53:53.939468046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:53:53.939786 kubelet[3926]: E1105 15:53:53.939647 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b55nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:53.940897 kubelet[3926]: E1105 15:53:53.940857 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:53:53.944467 systemd-networkd[2265]: cali87ae71648fb: Gained IPv6LL Nov 5 15:53:54.036954 containerd[2489]: time="2025-11-05T15:53:54.036728002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-dp5nk,Uid:c790f420-0686-46b7-ac9a-3d5362dc937f,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:54.129988 systemd-networkd[2265]: cali086c01d6051: Link UP Nov 5 15:53:54.130926 systemd-networkd[2265]: cali086c01d6051: Gained carrier Nov 5 15:53:54.144477 containerd[2489]: 2025-11-05 15:53:54.078 [INFO][5510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0 calico-apiserver-6445c55d69- calico-apiserver c790f420-0686-46b7-ac9a-3d5362dc937f 844 0 2025-11-05 15:53:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6445c55d69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 calico-apiserver-6445c55d69-dp5nk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali086c01d6051 [] [] }} ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-" Nov 5 15:53:54.144477 containerd[2489]: 2025-11-05 15:53:54.078 [INFO][5510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.144477 containerd[2489]: 2025-11-05 15:53:54.097 [INFO][5521] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" HandleID="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.097 [INFO][5521] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" HandleID="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"calico-apiserver-6445c55d69-dp5nk", "timestamp":"2025-11-05 15:53:54.097429216 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.097 [INFO][5521] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.097 [INFO][5521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.097 [INFO][5521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.102 [INFO][5521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.107 [INFO][5521] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.110 [INFO][5521] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.111 [INFO][5521] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.144842 containerd[2489]: 2025-11-05 15:53:54.112 [INFO][5521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.112 [INFO][5521] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.113 [INFO][5521] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0 Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.117 [INFO][5521] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.125 [INFO][5521] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.5/26] block=192.168.100.0/26 handle="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.125 [INFO][5521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.5/26] handle="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.125 [INFO][5521] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:54.145669 containerd[2489]: 2025-11-05 15:53:54.125 [INFO][5521] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.5/26] IPv6=[] ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" HandleID="k8s-pod-network.658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.146274 containerd[2489]: 2025-11-05 15:53:54.126 [INFO][5510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0", GenerateName:"calico-apiserver-6445c55d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"c790f420-0686-46b7-ac9a-3d5362dc937f", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6445c55d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"calico-apiserver-6445c55d69-dp5nk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali086c01d6051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:54.146487 containerd[2489]: 2025-11-05 15:53:54.126 [INFO][5510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.5/32] ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.146487 containerd[2489]: 2025-11-05 15:53:54.126 [INFO][5510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali086c01d6051 ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.146487 containerd[2489]: 2025-11-05 15:53:54.131 [INFO][5510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.146560 containerd[2489]: 2025-11-05 15:53:54.132 [INFO][5510] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0", GenerateName:"calico-apiserver-6445c55d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"c790f420-0686-46b7-ac9a-3d5362dc937f", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6445c55d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0", Pod:"calico-apiserver-6445c55d69-dp5nk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali086c01d6051", MAC:"be:69:99:d4:de:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:54.146622 containerd[2489]: 2025-11-05 15:53:54.142 [INFO][5510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-dp5nk" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--dp5nk-eth0" Nov 5 15:53:54.178946 kubelet[3926]: E1105 15:53:54.178914 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:53:54.180576 kubelet[3926]: E1105 15:53:54.180534 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:53:54.181385 containerd[2489]: time="2025-11-05T15:53:54.181304728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:54.187605 containerd[2489]: time="2025-11-05T15:53:54.187570704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:53:54.187605 containerd[2489]: time="2025-11-05T15:53:54.187667084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:53:54.188439 kubelet[3926]: E1105 15:53:54.188401 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:54.188890 kubelet[3926]: E1105 15:53:54.188715 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:54.188890 kubelet[3926]: E1105 15:53:54.188829 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:54.189984 kubelet[3926]: E1105 15:53:54.189957 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:54.200399 systemd-networkd[2265]: calib77e3df73bf: Gained IPv6LL Nov 5 15:53:54.225306 containerd[2489]: time="2025-11-05T15:53:54.225094144Z" level=info msg="connecting to shim 658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0" address="unix:///run/containerd/s/530e6f6b8df94cef244cee6b6dec616dc6e00fccdcf9bc0f8701ca5146c0ff08" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:54.249421 systemd[1]: Started cri-containerd-658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0.scope - libcontainer container 
658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0. Nov 5 15:53:54.289915 containerd[2489]: time="2025-11-05T15:53:54.289862295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-dp5nk,Uid:c790f420-0686-46b7-ac9a-3d5362dc937f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"658009a953afdd9b8dbbf98cccd1acde446bd2cbf4bef5da4a0584e7e3d0d1c0\"" Nov 5 15:53:54.291637 containerd[2489]: time="2025-11-05T15:53:54.291614447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:54.538456 containerd[2489]: time="2025-11-05T15:53:54.538341195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:54.541313 containerd[2489]: time="2025-11-05T15:53:54.541215223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:54.541313 containerd[2489]: time="2025-11-05T15:53:54.541232619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:54.541573 kubelet[3926]: E1105 15:53:54.541523 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:54.541638 kubelet[3926]: E1105 15:53:54.541586 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:54.541966 kubelet[3926]: E1105 15:53:54.541723 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:54.543701 kubelet[3926]: E1105 15:53:54.543661 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:53:54.648410 systemd-networkd[2265]: calie71c4a2d0bb: Gained IPv6LL Nov 5 15:53:55.032927 containerd[2489]: time="2025-11-05T15:53:55.032881789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-kdh6x,Uid:65694c8b-c2eb-4f3f-8724-f2d844e7483e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:55.033222 containerd[2489]: time="2025-11-05T15:53:55.032881748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6xhgc,Uid:53ece130-7146-4442-9e4a-b716be345aed,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:55.158852 systemd-networkd[2265]: calidda81d7779d: Link UP Nov 5 15:53:55.159626 systemd-networkd[2265]: calidda81d7779d: Gained carrier Nov 5 15:53:55.172236 containerd[2489]: 2025-11-05 15:53:55.089 [INFO][5583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0 calico-apiserver-6445c55d69- calico-apiserver 65694c8b-c2eb-4f3f-8724-f2d844e7483e 848 0 2025-11-05 15:53:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6445c55d69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 calico-apiserver-6445c55d69-kdh6x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidda81d7779d [] [] }} ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-" 
Nov 5 15:53:55.172236 containerd[2489]: 2025-11-05 15:53:55.089 [INFO][5583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.172236 containerd[2489]: 2025-11-05 15:53:55.118 [INFO][5608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" HandleID="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.118 [INFO][5608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" HandleID="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"calico-apiserver-6445c55d69-kdh6x", "timestamp":"2025-11-05 15:53:55.118031516 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.118 [INFO][5608] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.118 [INFO][5608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.118 [INFO][5608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.125 [INFO][5608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.128 [INFO][5608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.131 [INFO][5608] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.133 [INFO][5608] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.172889 containerd[2489]: 2025-11-05 15:53:55.135 [INFO][5608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.135 [INFO][5608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.137 [INFO][5608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.142 [INFO][5608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.6/26] block=192.168.100.0/26 handle="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.6/26] handle="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:55.173114 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.6/26] IPv6=[] ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" HandleID="k8s-pod-network.4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.173266 containerd[2489]: 2025-11-05 15:53:55.155 [INFO][5583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0", GenerateName:"calico-apiserver-6445c55d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"65694c8b-c2eb-4f3f-8724-f2d844e7483e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6445c55d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"calico-apiserver-6445c55d69-kdh6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidda81d7779d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:55.173361 containerd[2489]: 2025-11-05 15:53:55.155 [INFO][5583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.6/32] ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.173361 containerd[2489]: 2025-11-05 15:53:55.155 [INFO][5583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidda81d7779d ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.173361 containerd[2489]: 2025-11-05 15:53:55.159 [INFO][5583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.173432 containerd[2489]: 2025-11-05 15:53:55.159 [INFO][5583] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0", GenerateName:"calico-apiserver-6445c55d69-", Namespace:"calico-apiserver", SelfLink:"", UID:"65694c8b-c2eb-4f3f-8724-f2d844e7483e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6445c55d69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc", Pod:"calico-apiserver-6445c55d69-kdh6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidda81d7779d", MAC:"ae:5d:ac:b3:27:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:55.173488 containerd[2489]: 2025-11-05 15:53:55.169 [INFO][5583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" Namespace="calico-apiserver" Pod="calico-apiserver-6445c55d69-kdh6x" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-calico--apiserver--6445c55d69--kdh6x-eth0" Nov 5 15:53:55.186346 kubelet[3926]: E1105 15:53:55.185881 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:53:55.186346 kubelet[3926]: E1105 15:53:55.186306 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" 
podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:53:55.187534 kubelet[3926]: E1105 15:53:55.187495 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:53:55.224590 containerd[2489]: time="2025-11-05T15:53:55.224482218Z" level=info msg="connecting to shim 4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc" address="unix:///run/containerd/s/798f62bd3776d3d3073a8097a243bfa7a9e8d369e9031371d20235d705337163" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:55.268793 systemd[1]: Started cri-containerd-4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc.scope - libcontainer container 4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc. Nov 5 15:53:55.309180 systemd-networkd[2265]: calia00cd1974f1: Link UP Nov 5 15:53:55.310037 systemd-networkd[2265]: calia00cd1974f1: Gained carrier Nov 5 15:53:55.340112 containerd[2489]: 2025-11-05 15:53:55.095 [INFO][5593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0 coredns-674b8bbfcf- kube-system 53ece130-7146-4442-9e4a-b716be345aed 837 0 2025-11-05 15:53:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 coredns-674b8bbfcf-6xhgc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia00cd1974f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-" Nov 5 15:53:55.340112 containerd[2489]: 2025-11-05 15:53:55.095 [INFO][5593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340112 containerd[2489]: 2025-11-05 15:53:55.123 [INFO][5613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" HandleID="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.123 
[INFO][5613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" HandleID="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"coredns-674b8bbfcf-6xhgc", "timestamp":"2025-11-05 15:53:55.123104091 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.123 [INFO][5613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.152 [INFO][5613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.229 [INFO][5613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.252 [INFO][5613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.267 [INFO][5613] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.272 [INFO][5613] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340538 containerd[2489]: 2025-11-05 15:53:55.275 [INFO][5613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.276 [INFO][5613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.279 [INFO][5613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757 Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.289 [INFO][5613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.303 [INFO][5613] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.7/26] block=192.168.100.0/26 handle="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.304 [INFO][5613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.7/26] handle="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:55.340782 containerd[2489]: 
2025-11-05 15:53:55.304 [INFO][5613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:55.340782 containerd[2489]: 2025-11-05 15:53:55.304 [INFO][5613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.7/26] IPv6=[] ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" HandleID="k8s-pod-network.1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.306 [INFO][5593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53ece130-7146-4442-9e4a-b716be345aed", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"coredns-674b8bbfcf-6xhgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia00cd1974f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.306 [INFO][5593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.7/32] ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.306 [INFO][5593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia00cd1974f1 ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.310 [INFO][5593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.311 [INFO][5593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"53ece130-7146-4442-9e4a-b716be345aed", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757", Pod:"coredns-674b8bbfcf-6xhgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia00cd1974f1", MAC:"a6:63:c9:77:32:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:55.340948 containerd[2489]: 2025-11-05 15:53:55.338 [INFO][5593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" Namespace="kube-system" Pod="coredns-674b8bbfcf-6xhgc" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--6xhgc-eth0" Nov 5 15:53:55.398853 containerd[2489]: time="2025-11-05T15:53:55.398471388Z" level=info msg="connecting to shim 1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757" address="unix:///run/containerd/s/3574432b2fa44993f8f00e568db2290716280b1345d428655faf3a4c3dcefce0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:55.415696 containerd[2489]: time="2025-11-05T15:53:55.415189614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6445c55d69-kdh6x,Uid:65694c8b-c2eb-4f3f-8724-f2d844e7483e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4693cbd3050a58d16b29c39fe27dc0821ac2af4bf867dded3d4aabf3933a82bc\"" Nov 5 15:53:55.422016 containerd[2489]: 
time="2025-11-05T15:53:55.421986930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:55.438623 systemd[1]: Started cri-containerd-1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757.scope - libcontainer container 1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757. Nov 5 15:53:55.508407 containerd[2489]: time="2025-11-05T15:53:55.508271649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6xhgc,Uid:53ece130-7146-4442-9e4a-b716be345aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757\"" Nov 5 15:53:55.516308 containerd[2489]: time="2025-11-05T15:53:55.516227132Z" level=info msg="CreateContainer within sandbox \"1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:53:55.535993 containerd[2489]: time="2025-11-05T15:53:55.535969070Z" level=info msg="Container 5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:55.541367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726898401.mount: Deactivated successfully. Nov 5 15:53:55.544425 systemd-networkd[2265]: cali086c01d6051: Gained IPv6LL Nov 5 15:53:55.552876 containerd[2489]: time="2025-11-05T15:53:55.552845773Z" level=info msg="CreateContainer within sandbox \"1743c4ce790c01634cd8cf111b965c5c7b105b0b56cad3b5670cfd8eb5c6d757\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e\"" Nov 5 15:53:55.553866 containerd[2489]: time="2025-11-05T15:53:55.553836890Z" level=info msg="StartContainer for \"5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e\"" Nov 5 15:53:55.554840 containerd[2489]: time="2025-11-05T15:53:55.554788267Z" level=info msg="connecting to shim 5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e" address="unix:///run/containerd/s/3574432b2fa44993f8f00e568db2290716280b1345d428655faf3a4c3dcefce0" protocol=ttrpc version=3 Nov 5 15:53:55.575607 systemd[1]: Started cri-containerd-5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e.scope - libcontainer container 5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e. 
Nov 5 15:53:55.616232 containerd[2489]: time="2025-11-05T15:53:55.616169029Z" level=info msg="StartContainer for \"5bdfc5309f39b8cf71faceb86aa9588b1d7e2f3c8abc9a817b554f9009c3276e\" returns successfully" Nov 5 15:53:55.706518 containerd[2489]: time="2025-11-05T15:53:55.706445756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:55.709538 containerd[2489]: time="2025-11-05T15:53:55.709402876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:55.709538 containerd[2489]: time="2025-11-05T15:53:55.709503947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:55.710101 kubelet[3926]: E1105 15:53:55.710051 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:55.710164 kubelet[3926]: E1105 15:53:55.710113 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:55.710327 kubelet[3926]: E1105 15:53:55.710289 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcbpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:55.711785 kubelet[3926]: E1105 15:53:55.711727 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:53:56.033628 containerd[2489]: time="2025-11-05T15:53:56.033347363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xndbm,Uid:73bb0ab3-5e34-4e2f-bfe9-ee75e359909c,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:56.136935 systemd-networkd[2265]: calid6c32fbc946: Link UP Nov 5 15:53:56.139353 systemd-networkd[2265]: calid6c32fbc946: Gained carrier Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.077 [INFO][5775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0 coredns-674b8bbfcf- kube-system 73bb0ab3-5e34-4e2f-bfe9-ee75e359909c 842 0 2025-11-05 15:53:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-a-e6d953e7e7 coredns-674b8bbfcf-xndbm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6c32fbc946 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.078 [INFO][5775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.100 [INFO][5786] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" HandleID="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.100 [INFO][5786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" HandleID="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-a-e6d953e7e7", "pod":"coredns-674b8bbfcf-xndbm", "timestamp":"2025-11-05 15:53:56.100509335 +0000 UTC"}, Hostname:"ci-4487.0.1-a-e6d953e7e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.100 [INFO][5786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.100 [INFO][5786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.100 [INFO][5786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-e6d953e7e7' Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.105 [INFO][5786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.109 [INFO][5786] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.113 [INFO][5786] ipam/ipam.go 511: Trying affinity for 192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.114 [INFO][5786] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.116 [INFO][5786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.116 [INFO][5786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.117 [INFO][5786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39 Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.120 [INFO][5786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.132 [INFO][5786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.100.8/26] block=192.168.100.0/26 
handle="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.132 [INFO][5786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.8/26] handle="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" host="ci-4487.0.1-a-e6d953e7e7" Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.132 [INFO][5786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:56.160761 containerd[2489]: 2025-11-05 15:53:56.132 [INFO][5786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.100.8/26] IPv6=[] ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" HandleID="k8s-pod-network.d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Workload="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.133 [INFO][5775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73bb0ab3-5e34-4e2f-bfe9-ee75e359909c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"", Pod:"coredns-674b8bbfcf-xndbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6c32fbc946", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.133 [INFO][5775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.8/32] ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.133 [INFO][5775] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calid6c32fbc946 ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.142 [INFO][5775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.143 [INFO][5775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73bb0ab3-5e34-4e2f-bfe9-ee75e359909c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-e6d953e7e7", ContainerID:"d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39", Pod:"coredns-674b8bbfcf-xndbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6c32fbc946", MAC:"de:72:92:97:bb:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:56.161579 containerd[2489]: 2025-11-05 15:53:56.157 [INFO][5775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" Namespace="kube-system" Pod="coredns-674b8bbfcf-xndbm" WorkloadEndpoint="ci--4487.0.1--a--e6d953e7e7-k8s-coredns--674b8bbfcf--xndbm-eth0" Nov 5 15:53:56.187964 kubelet[3926]: E1105 15:53:56.187928 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:53:56.188574 kubelet[3926]: E1105 15:53:56.188014 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:53:56.200393 containerd[2489]: time="2025-11-05T15:53:56.200353517Z" level=info msg="connecting to shim d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39" address="unix:///run/containerd/s/71c298d1dde6c5a8f98fbb6c3eea6967bc4c16b6e40d9abb93876a7dae05b225" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:56.206558 kubelet[3926]: I1105 15:53:56.206477 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6xhgc" podStartSLOduration=53.206460311 podStartE2EDuration="53.206460311s" podCreationTimestamp="2025-11-05 15:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:56.202661549 +0000 UTC m=+58.283918196" watchObservedRunningTime="2025-11-05 15:53:56.206460311 +0000 UTC m=+58.287716949" Nov 5 15:53:56.234615 systemd[1]: Started cri-containerd-d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39.scope - libcontainer container d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39. 
Nov 5 15:53:56.309378 containerd[2489]: time="2025-11-05T15:53:56.308532756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xndbm,Uid:73bb0ab3-5e34-4e2f-bfe9-ee75e359909c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39\"" Nov 5 15:53:56.322869 containerd[2489]: time="2025-11-05T15:53:56.321301158Z" level=info msg="CreateContainer within sandbox \"d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:53:56.341240 containerd[2489]: time="2025-11-05T15:53:56.341081869Z" level=info msg="Container 6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:56.361565 containerd[2489]: time="2025-11-05T15:53:56.361545896Z" level=info msg="CreateContainer within sandbox \"d793511049a1acb4d38edbd15de8f51d5f413faa31e890901be17c0ebb3f8c39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973\"" Nov 5 15:53:56.362439 containerd[2489]: time="2025-11-05T15:53:56.362122347Z" level=info msg="StartContainer for \"6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973\"" Nov 5 15:53:56.363433 containerd[2489]: time="2025-11-05T15:53:56.363401934Z" level=info msg="connecting to shim 6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973" address="unix:///run/containerd/s/71c298d1dde6c5a8f98fbb6c3eea6967bc4c16b6e40d9abb93876a7dae05b225" protocol=ttrpc version=3 Nov 5 15:53:56.381423 systemd[1]: Started cri-containerd-6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973.scope - libcontainer container 6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973. 
Nov 5 15:53:56.409689 containerd[2489]: time="2025-11-05T15:53:56.409656196Z" level=info msg="StartContainer for \"6bd3b3e5b5a40eeaa2e81e08b10b1aa74bd80e315113ffdb8749339eaa751973\" returns successfully" Nov 5 15:53:56.824585 systemd-networkd[2265]: calia00cd1974f1: Gained IPv6LL Nov 5 15:53:56.888406 systemd-networkd[2265]: calidda81d7779d: Gained IPv6LL Nov 5 15:53:57.191615 kubelet[3926]: E1105 15:53:57.191575 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:53:57.206128 kubelet[3926]: I1105 15:53:57.206072 3926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xndbm" podStartSLOduration=54.206059144 podStartE2EDuration="54.206059144s" podCreationTimestamp="2025-11-05 15:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:57.203232182 +0000 UTC m=+59.284488850" watchObservedRunningTime="2025-11-05 15:53:57.206059144 +0000 UTC m=+59.287315777" Nov 5 15:53:57.720562 systemd-networkd[2265]: calid6c32fbc946: Gained IPv6LL Nov 5 15:54:05.033443 containerd[2489]: time="2025-11-05T15:54:05.033336699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:54:05.306481 containerd[2489]: time="2025-11-05T15:54:05.305898015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:05.311302 containerd[2489]: time="2025-11-05T15:54:05.310243903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:54:05.311302 containerd[2489]: time="2025-11-05T15:54:05.310301663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:54:05.311458 kubelet[3926]: E1105 15:54:05.310498 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:05.311458 kubelet[3926]: E1105 15:54:05.310549 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:05.311458 kubelet[3926]: E1105 15:54:05.310681 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80c9a76a2ed646769b0abd352af315e9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:05.312897 containerd[2489]: time="2025-11-05T15:54:05.312869429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:54:05.608267 containerd[2489]: time="2025-11-05T15:54:05.608217646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:05.612442 containerd[2489]: time="2025-11-05T15:54:05.612407541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:54:05.612513 containerd[2489]: time="2025-11-05T15:54:05.612495574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:54:05.613677 kubelet[3926]: E1105 15:54:05.612703 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:05.613677 kubelet[3926]: E1105 15:54:05.612757 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:05.613677 kubelet[3926]: E1105 15:54:05.612889 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:05.614373 kubelet[3926]: E1105 15:54:05.614326 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:54:06.033590 containerd[2489]: time="2025-11-05T15:54:06.033265772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:54:06.509844 containerd[2489]: time="2025-11-05T15:54:06.509790693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
15:54:06.515026 containerd[2489]: time="2025-11-05T15:54:06.514997714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:54:06.515086 containerd[2489]: time="2025-11-05T15:54:06.515078070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:54:06.515243 kubelet[3926]: E1105 15:54:06.515205 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:54:06.516294 kubelet[3926]: E1105 15:54:06.515256 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:54:06.516294 kubelet[3926]: E1105 15:54:06.515659 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hwpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:06.516456 containerd[2489]: time="2025-11-05T15:54:06.515569757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:54:06.517358 kubelet[3926]: E1105 15:54:06.517325 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:54:06.771002 containerd[2489]: time="2025-11-05T15:54:06.770886873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:06.776032 containerd[2489]: time="2025-11-05T15:54:06.775998432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:54:06.776120 containerd[2489]: time="2025-11-05T15:54:06.776086190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:06.776317 kubelet[3926]: E1105 15:54:06.776208 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:54:06.776317 kubelet[3926]: E1105 15:54:06.776260 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:54:06.776509 kubelet[3926]: E1105 15:54:06.776454 
3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b55nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:06.777730 kubelet[3926]: E1105 15:54:06.777689 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" 
podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:54:07.033596 containerd[2489]: time="2025-11-05T15:54:07.033042834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:54:07.340334 containerd[2489]: time="2025-11-05T15:54:07.340301265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:07.344645 containerd[2489]: time="2025-11-05T15:54:07.344620606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:54:07.344736 containerd[2489]: time="2025-11-05T15:54:07.344678703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:54:07.344822 kubelet[3926]: E1105 15:54:07.344785 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:54:07.344881 kubelet[3926]: E1105 15:54:07.344831 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:54:07.344985 kubelet[3926]: E1105 15:54:07.344956 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:07.348058 containerd[2489]: time="2025-11-05T15:54:07.348020941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:54:07.614889 containerd[2489]: time="2025-11-05T15:54:07.614758787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:07.617610 containerd[2489]: time="2025-11-05T15:54:07.617567812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:54:07.617722 containerd[2489]: time="2025-11-05T15:54:07.617658884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:54:07.617893 kubelet[3926]: E1105 15:54:07.617860 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:54:07.618208 kubelet[3926]: E1105 15:54:07.617905 3926 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:54:07.618208 kubelet[3926]: E1105 15:54:07.618044 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:07.619354 kubelet[3926]: E1105 15:54:07.619298 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:54:10.034172 containerd[2489]: time="2025-11-05T15:54:10.034066090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:54:10.282297 containerd[2489]: time="2025-11-05T15:54:10.282246301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:10.285485 containerd[2489]: time="2025-11-05T15:54:10.285356582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:54:10.285485 containerd[2489]: time="2025-11-05T15:54:10.285360620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:10.285692 kubelet[3926]: E1105 15:54:10.285536 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:10.285692 kubelet[3926]: E1105 15:54:10.285580 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:10.285980 kubelet[3926]: E1105 15:54:10.285720 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:10.287270 kubelet[3926]: E1105 15:54:10.287217 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:54:11.033297 containerd[2489]: time="2025-11-05T15:54:11.033244661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:54:11.273185 containerd[2489]: time="2025-11-05T15:54:11.273124210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:11.276076 containerd[2489]: time="2025-11-05T15:54:11.276038954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:54:11.276165 containerd[2489]: time="2025-11-05T15:54:11.276117691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:11.276266 kubelet[3926]: E1105 15:54:11.276225 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:11.276332 kubelet[3926]: E1105 15:54:11.276303 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:11.276726 kubelet[3926]: E1105 15:54:11.276446 3926 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcbpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:11.278533 kubelet[3926]: E1105 15:54:11.278485 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:54:19.034405 kubelet[3926]: E1105 15:54:19.033931 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:54:19.034405 kubelet[3926]: E1105 15:54:19.034265 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:54:19.227297 containerd[2489]: time="2025-11-05T15:54:19.227163529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" id:\"0cfcfa805c980ae256dd46992d542e2b3251e328267d43ce08aea2b0a0926b7e\" pid:5923 exited_at:{seconds:1762358059 nanos:226810058}" Nov 5 15:54:19.297980 containerd[2489]: time="2025-11-05T15:54:19.297725264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" id:\"c4e3e9019dfccdae874862c98de62f8a0408c4d20f22d6880b2abb1602782beb\" pid:5947 exited_at:{seconds:1762358059 nanos:297532569}" Nov 5 15:54:20.038332 kubelet[3926]: E1105 15:54:20.038235 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:54:20.038849 kubelet[3926]: E1105 15:54:20.038390 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" 
podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:54:23.033264 kubelet[3926]: E1105 15:54:23.033198 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:54:24.034307 kubelet[3926]: E1105 15:54:24.033834 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:54:31.034230 containerd[2489]: time="2025-11-05T15:54:31.034181111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:54:31.355991 containerd[2489]: time="2025-11-05T15:54:31.355938801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:31.359677 containerd[2489]: time="2025-11-05T15:54:31.359629650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:54:31.359794 containerd[2489]: time="2025-11-05T15:54:31.359655571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:31.359992 kubelet[3926]: E1105 15:54:31.359940 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:54:31.361210 kubelet[3926]: E1105 15:54:31.360009 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:54:31.361474 kubelet[3926]: E1105 15:54:31.361412 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b55nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:31.362700 kubelet[3926]: E1105 15:54:31.362660 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:54:32.037864 containerd[2489]: 
time="2025-11-05T15:54:32.037823048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:54:32.414743 containerd[2489]: time="2025-11-05T15:54:32.414688103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:32.420953 containerd[2489]: time="2025-11-05T15:54:32.420895077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:54:32.421067 containerd[2489]: time="2025-11-05T15:54:32.421011104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:54:32.423489 kubelet[3926]: E1105 15:54:32.423436 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:54:32.423827 kubelet[3926]: E1105 15:54:32.423505 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:54:32.423982 containerd[2489]: time="2025-11-05T15:54:32.423958538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:54:32.424332 kubelet[3926]: E1105 15:54:32.424259 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:32.676825 containerd[2489]: time="2025-11-05T15:54:32.676571271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:32.679445 containerd[2489]: time="2025-11-05T15:54:32.679252571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:54:32.679445 containerd[2489]: time="2025-11-05T15:54:32.679410773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:54:32.679628 kubelet[3926]: E1105 15:54:32.679589 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:54:32.679670 kubelet[3926]: E1105 15:54:32.679648 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:54:32.679924 kubelet[3926]: E1105 15:54:32.679880 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hwpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:32.681606 kubelet[3926]: E1105 15:54:32.681485 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:54:32.681777 containerd[2489]: time="2025-11-05T15:54:32.681751517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:54:32.937895 containerd[2489]: time="2025-11-05T15:54:32.937764489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:32.946301 containerd[2489]: time="2025-11-05T15:54:32.944717707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:54:32.946301 containerd[2489]: time="2025-11-05T15:54:32.944812388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:54:32.946460 kubelet[3926]: E1105 15:54:32.944985 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:54:32.946460 kubelet[3926]: E1105 15:54:32.945041 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:54:32.946460 kubelet[3926]: E1105 15:54:32.945172 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:32.946750 kubelet[3926]: E1105 15:54:32.946711 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:54:34.039625 containerd[2489]: time="2025-11-05T15:54:34.039578578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:54:34.295827 containerd[2489]: time="2025-11-05T15:54:34.295685844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:34.299777 containerd[2489]: time="2025-11-05T15:54:34.299729097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:54:34.299884 containerd[2489]: time="2025-11-05T15:54:34.299831995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:54:34.300079 kubelet[3926]: E1105 15:54:34.300043 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:34.300406 kubelet[3926]: E1105 15:54:34.300095 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:34.300406 kubelet[3926]: E1105 15:54:34.300226 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80c9a76a2ed646769b0abd352af315e9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:34.304349 containerd[2489]: time="2025-11-05T15:54:34.304267525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:54:34.560168 containerd[2489]: time="2025-11-05T15:54:34.560031058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:34.563186 containerd[2489]: time="2025-11-05T15:54:34.563122672Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:54:34.563307 containerd[2489]: time="2025-11-05T15:54:34.563246747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:54:34.563468 kubelet[3926]: E1105 15:54:34.563429 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:34.563512 kubelet[3926]: E1105 15:54:34.563488 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:34.563677 kubelet[3926]: E1105 15:54:34.563631 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:34.565640 kubelet[3926]: E1105 15:54:34.565592 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:54:36.036810 containerd[2489]: time="2025-11-05T15:54:36.036768196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:54:36.280314 containerd[2489]: time="2025-11-05T15:54:36.280062051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:36.283742 containerd[2489]: time="2025-11-05T15:54:36.283672817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:54:36.284036 containerd[2489]: time="2025-11-05T15:54:36.283787390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:36.284269 kubelet[3926]: E1105 15:54:36.284189 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:36.284269 kubelet[3926]: E1105 15:54:36.284249 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:36.285205 kubelet[3926]: E1105 15:54:36.284767 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcbpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:36.286094 kubelet[3926]: E1105 15:54:36.286062 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:54:38.035663 containerd[2489]: time="2025-11-05T15:54:38.035612385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:54:38.319372 containerd[2489]: time="2025-11-05T15:54:38.318401928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:38.321986 containerd[2489]: time="2025-11-05T15:54:38.321940780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:54:38.322168 containerd[2489]: time="2025-11-05T15:54:38.322027535Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:54:38.322208 kubelet[3926]: E1105 15:54:38.322144 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:38.322208 kubelet[3926]: E1105 15:54:38.322196 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:54:38.323375 kubelet[3926]: E1105 15:54:38.322904 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:38.330240 kubelet[3926]: E1105 15:54:38.330201 3926 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:54:44.036564 kubelet[3926]: E1105 15:54:44.036517 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:54:45.033776 kubelet[3926]: E1105 15:54:45.033413 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:54:46.035309 kubelet[3926]: E1105 15:54:46.035211 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:54:46.036412 kubelet[3926]: E1105 15:54:46.036375 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:54:47.122937 systemd[1]: Started sshd@7-10.200.8.46:22-10.200.16.10:34368.service - OpenSSH per-connection server daemon (10.200.16.10:34368). Nov 5 15:54:47.841653 sshd[5977]: Accepted publickey for core from 10.200.16.10 port 34368 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:54:47.842990 sshd-session[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:47.847573 systemd-logind[2457]: New session 10 of user core. Nov 5 15:54:47.852552 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:54:48.404425 sshd[5981]: Connection closed by 10.200.16.10 port 34368 Nov 5 15:54:48.405478 sshd-session[5977]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:48.408739 systemd[1]: sshd@7-10.200.8.46:22-10.200.16.10:34368.service: Deactivated successfully. Nov 5 15:54:48.412379 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:54:48.414362 systemd-logind[2457]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:54:48.417118 systemd-logind[2457]: Removed session 10. Nov 5 15:54:49.033857 kubelet[3926]: E1105 15:54:49.033805 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:54:49.290382 containerd[2489]: time="2025-11-05T15:54:49.290254958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" id:\"84b23fca3fc05fc3fd7eb86083b45b605dd092bca8474ed49206a71c15ba5f2a\" pid:6005 exited_at:{seconds:1762358089 nanos:289964089}" Nov 5 15:54:51.033500 kubelet[3926]: E1105 15:54:51.033377 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:54:53.534340 systemd[1]: Started sshd@8-10.200.8.46:22-10.200.16.10:55934.service - OpenSSH per-connection server daemon (10.200.16.10:55934). Nov 5 15:54:54.258226 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 55934 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:54:54.259447 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:54.263364 systemd-logind[2457]: New session 11 of user core. 
Nov 5 15:54:54.268482 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:54:54.850693 sshd[6020]: Connection closed by 10.200.16.10 port 55934 Nov 5 15:54:54.851009 sshd-session[6017]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:54.856196 systemd-logind[2457]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:54:54.856504 systemd[1]: sshd@8-10.200.8.46:22-10.200.16.10:55934.service: Deactivated successfully. Nov 5 15:54:54.859042 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:54:54.860564 systemd-logind[2457]: Removed session 11. Nov 5 15:54:57.036575 kubelet[3926]: E1105 15:54:57.036509 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:54:58.037625 kubelet[3926]: E1105 15:54:58.037560 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:54:59.033505 kubelet[3926]: E1105 15:54:59.033427 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:54:59.980534 systemd[1]: Started sshd@9-10.200.8.46:22-10.200.16.10:46140.service - OpenSSH per-connection server daemon (10.200.16.10:46140). Nov 5 15:55:00.700233 sshd[6035]: Accepted publickey for core from 10.200.16.10 port 46140 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:00.701511 sshd-session[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:00.705593 systemd-logind[2457]: New session 12 of user core. Nov 5 15:55:00.709468 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 5 15:55:01.035322 kubelet[3926]: E1105 15:55:01.035102 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:55:01.254723 sshd[6038]: Connection closed by 10.200.16.10 port 46140 Nov 5 15:55:01.255338 sshd-session[6035]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:01.258576 systemd[1]: sshd@9-10.200.8.46:22-10.200.16.10:46140.service: Deactivated successfully. Nov 5 15:55:01.260455 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:55:01.261857 systemd-logind[2457]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:55:01.263207 systemd-logind[2457]: Removed session 12. Nov 5 15:55:03.033745 kubelet[3926]: E1105 15:55:03.033697 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:55:04.034901 kubelet[3926]: E1105 15:55:04.034637 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:55:06.383547 systemd[1]: Started sshd@10-10.200.8.46:22-10.200.16.10:46156.service - OpenSSH per-connection server daemon (10.200.16.10:46156). Nov 5 15:55:07.099971 sshd[6053]: Accepted publickey for core from 10.200.16.10 port 46156 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:07.101156 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:07.105367 systemd-logind[2457]: New session 13 of user core. Nov 5 15:55:07.110433 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 15:55:07.649659 sshd[6056]: Connection closed by 10.200.16.10 port 46156 Nov 5 15:55:07.651425 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:07.654876 systemd-logind[2457]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:55:07.655243 systemd[1]: sshd@10-10.200.8.46:22-10.200.16.10:46156.service: Deactivated successfully. Nov 5 15:55:07.657181 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:55:07.659078 systemd-logind[2457]: Removed session 13. Nov 5 15:55:09.034631 kubelet[3926]: E1105 15:55:09.034480 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:55:09.035523 kubelet[3926]: E1105 15:55:09.035482 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:55:10.032683 kubelet[3926]: E1105 15:55:10.032620 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:55:12.777535 systemd[1]: Started sshd@11-10.200.8.46:22-10.200.16.10:55602.service - OpenSSH per-connection server daemon (10.200.16.10:55602). Nov 5 15:55:13.500182 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 55602 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:13.501355 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:13.505001 systemd-logind[2457]: New session 14 of user core. Nov 5 15:55:13.510412 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 5 15:55:14.047973 sshd[6079]: Connection closed by 10.200.16.10 port 55602 Nov 5 15:55:14.048616 sshd-session[6076]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:14.052405 systemd[1]: sshd@11-10.200.8.46:22-10.200.16.10:55602.service: Deactivated successfully. Nov 5 15:55:14.054241 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:55:14.055034 systemd-logind[2457]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:55:14.056742 systemd-logind[2457]: Removed session 14. Nov 5 15:55:14.173162 systemd[1]: Started sshd@12-10.200.8.46:22-10.200.16.10:55612.service - OpenSSH per-connection server daemon (10.200.16.10:55612). Nov 5 15:55:14.891245 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 55612 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:14.892059 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:14.896640 systemd-logind[2457]: New session 15 of user core. Nov 5 15:55:14.904404 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:55:15.469799 sshd[6095]: Connection closed by 10.200.16.10 port 55612 Nov 5 15:55:15.471010 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:15.476290 systemd-logind[2457]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:55:15.476957 systemd[1]: sshd@12-10.200.8.46:22-10.200.16.10:55612.service: Deactivated successfully. Nov 5 15:55:15.481523 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:55:15.486039 systemd-logind[2457]: Removed session 15. Nov 5 15:55:15.591658 systemd[1]: Started sshd@13-10.200.8.46:22-10.200.16.10:55626.service - OpenSSH per-connection server daemon (10.200.16.10:55626). 
Nov 5 15:55:16.034922 containerd[2489]: time="2025-11-05T15:55:16.034874378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:16.288115 containerd[2489]: time="2025-11-05T15:55:16.287683933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:16.298298 containerd[2489]: time="2025-11-05T15:55:16.298094779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:16.298298 containerd[2489]: time="2025-11-05T15:55:16.298144269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:16.298630 kubelet[3926]: E1105 15:55:16.298597 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:16.298976 kubelet[3926]: E1105 15:55:16.298647 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:16.298976 kubelet[3926]: E1105 15:55:16.298813 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:16.301663 containerd[2489]: time="2025-11-05T15:55:16.301582521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:16.317612 sshd[6105]: Accepted publickey for core from 10.200.16.10 port 55626 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:16.319233 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:16.326591 systemd-logind[2457]: New session 16 of user core. Nov 5 15:55:16.331504 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:55:16.562498 containerd[2489]: time="2025-11-05T15:55:16.562367413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:16.566515 containerd[2489]: time="2025-11-05T15:55:16.566458385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:16.566630 containerd[2489]: time="2025-11-05T15:55:16.566526684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:16.566823 kubelet[3926]: E1105 15:55:16.566778 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:16.566897 kubelet[3926]: E1105 15:55:16.566835 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:16.567017 kubelet[3926]: E1105 15:55:16.566972 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfs9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nllfx_calico-system(0f742570-0c09-4ef6-8800-4cac3ba577e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:16.568405 kubelet[3926]: E1105 15:55:16.568370 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:55:16.894162 sshd[6110]: Connection closed by 10.200.16.10 port 55626 Nov 5 15:55:16.894753 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:16.898628 systemd-logind[2457]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:55:16.899272 systemd[1]: sshd@13-10.200.8.46:22-10.200.16.10:55626.service: Deactivated successfully. Nov 5 15:55:16.901252 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:55:16.902908 systemd-logind[2457]: Removed session 16. 
Nov 5 15:55:18.035212 kubelet[3926]: E1105 15:55:18.034530 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:55:19.035388 containerd[2489]: time="2025-11-05T15:55:19.035314183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:19.352255 containerd[2489]: time="2025-11-05T15:55:19.352189929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:19.355215 containerd[2489]: time="2025-11-05T15:55:19.355085166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:19.355215 containerd[2489]: time="2025-11-05T15:55:19.355189277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:19.356120 kubelet[3926]: E1105 15:55:19.356069 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:19.357113 kubelet[3926]: E1105 15:55:19.356489 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:19.358391 kubelet[3926]: E1105 15:55:19.357462 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcbpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-kdh6x_calico-apiserver(65694c8b-c2eb-4f3f-8724-f2d844e7483e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:19.359915 kubelet[3926]: E1105 15:55:19.359837 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:55:19.472635 containerd[2489]: time="2025-11-05T15:55:19.472587539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" id:\"7379704110d9aae64af87e4ba57910c9f64ccaca145f7a79a7def2acc6e77d0d\" pid:6133 exited_at:{seconds:1762358119 nanos:472153003}" Nov 5 15:55:21.034302 containerd[2489]: time="2025-11-05T15:55:21.034242975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:21.426435 containerd[2489]: time="2025-11-05T15:55:21.426386063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:21.429919 containerd[2489]: time="2025-11-05T15:55:21.429867979Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:21.430031 containerd[2489]: time="2025-11-05T15:55:21.429960043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:21.430249 kubelet[3926]: E1105 15:55:21.430213 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:21.430617 kubelet[3926]: E1105 15:55:21.430265 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:21.430617 kubelet[3926]: E1105 15:55:21.430408 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:80c9a76a2ed646769b0abd352af315e9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:21.433009 containerd[2489]: time="2025-11-05T15:55:21.432981571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:21.779441 containerd[2489]: time="2025-11-05T15:55:21.779255948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:21.782314 
containerd[2489]: time="2025-11-05T15:55:21.782239231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:21.782403 containerd[2489]: time="2025-11-05T15:55:21.782243423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:21.782570 kubelet[3926]: E1105 15:55:21.782532 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:21.782652 kubelet[3926]: E1105 15:55:21.782585 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:21.782779 kubelet[3926]: E1105 15:55:21.782737 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jz4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-845d8f4b9b-8qsjt_calico-system(80c81441-a30e-43a5-948f-5a1c2800b71c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:21.784085 kubelet[3926]: E1105 15:55:21.784014 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:55:22.024531 systemd[1]: Started sshd@14-10.200.8.46:22-10.200.16.10:55096.service - OpenSSH per-connection server daemon (10.200.16.10:55096). Nov 5 15:55:22.038171 containerd[2489]: time="2025-11-05T15:55:22.037901466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:22.327359 containerd[2489]: time="2025-11-05T15:55:22.326948454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:22.330262 containerd[2489]: time="2025-11-05T15:55:22.330134580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:22.330262 containerd[2489]: time="2025-11-05T15:55:22.330234225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:22.330894 kubelet[3926]: E1105 15:55:22.330546 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:22.330894 kubelet[3926]: E1105 15:55:22.330599 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:22.330894 kubelet[3926]: E1105 15:55:22.330762 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hwpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b6458fdcf-zcgg8_calico-system(44ab8120-7220-47ae-93cc-8b7e8505e744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:22.332270 kubelet[3926]: E1105 15:55:22.332218 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:55:22.744344 sshd[6147]: Accepted publickey for core from 10.200.16.10 port 55096 ssh2: RSA 
SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:22.745960 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:22.751929 systemd-logind[2457]: New session 17 of user core. Nov 5 15:55:22.759397 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:55:23.034030 containerd[2489]: time="2025-11-05T15:55:23.033879097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:55:23.292580 containerd[2489]: time="2025-11-05T15:55:23.292337575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:23.296672 containerd[2489]: time="2025-11-05T15:55:23.296422169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:55:23.296825 containerd[2489]: time="2025-11-05T15:55:23.296466452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:23.296860 kubelet[3926]: E1105 15:55:23.296800 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:23.296860 kubelet[3926]: E1105 15:55:23.296850 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:23.297144 kubelet[3926]: E1105 15:55:23.297009 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b55nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xb2bl_calico-system(6345634e-739d-4c50-8a09-88a959b92cba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:23.298251 kubelet[3926]: E1105 15:55:23.298209 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:55:23.320851 sshd[6162]: Connection closed by 
10.200.16.10 port 55096 Nov 5 15:55:23.321517 sshd-session[6147]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:23.325801 systemd[1]: sshd@14-10.200.8.46:22-10.200.16.10:55096.service: Deactivated successfully. Nov 5 15:55:23.327346 systemd-logind[2457]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:55:23.329455 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:55:23.333920 systemd-logind[2457]: Removed session 17. Nov 5 15:55:28.446935 systemd[1]: Started sshd@15-10.200.8.46:22-10.200.16.10:55108.service - OpenSSH per-connection server daemon (10.200.16.10:55108). Nov 5 15:55:29.034891 containerd[2489]: time="2025-11-05T15:55:29.034724889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:29.163197 sshd[6188]: Accepted publickey for core from 10.200.16.10 port 55108 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:29.164441 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:29.170451 systemd-logind[2457]: New session 18 of user core. Nov 5 15:55:29.179400 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:55:29.284096 containerd[2489]: time="2025-11-05T15:55:29.284045818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:29.287153 containerd[2489]: time="2025-11-05T15:55:29.286987810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:29.287153 containerd[2489]: time="2025-11-05T15:55:29.287002164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:29.287482 kubelet[3926]: E1105 15:55:29.287214 3926 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:29.287482 kubelet[3926]: E1105 15:55:29.287271 3926 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:29.288202 kubelet[3926]: E1105 15:55:29.287600 3926 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6445c55d69-dp5nk_calico-apiserver(c790f420-0686-46b7-ac9a-3d5362dc937f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:29.288850 kubelet[3926]: E1105 15:55:29.288810 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:55:29.742951 sshd[6191]: Connection closed by 10.200.16.10 port 55108 Nov 5 15:55:29.741395 sshd-session[6188]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:29.745100 systemd[1]: sshd@15-10.200.8.46:22-10.200.16.10:55108.service: Deactivated successfully. Nov 5 15:55:29.747129 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:55:29.748076 systemd-logind[2457]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:55:29.749214 systemd-logind[2457]: Removed session 18. 
Nov 5 15:55:30.035807 kubelet[3926]: E1105 15:55:30.035674 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:55:30.037799 kubelet[3926]: E1105 15:55:30.037236 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:55:34.864644 systemd[1]: Started sshd@16-10.200.8.46:22-10.200.16.10:40824.service - OpenSSH per-connection server daemon (10.200.16.10:40824). 
Nov 5 15:55:35.035893 kubelet[3926]: E1105 15:55:35.035854 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:55:35.036562 kubelet[3926]: E1105 15:55:35.036531 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:55:35.586681 sshd[6205]: Accepted publickey for core from 10.200.16.10 port 40824 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:35.587858 sshd-session[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:35.593273 systemd-logind[2457]: New session 19 of user core. Nov 5 15:55:35.604487 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:55:36.143556 sshd[6208]: Connection closed by 10.200.16.10 port 40824 Nov 5 15:55:36.144169 sshd-session[6205]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:36.148161 systemd-logind[2457]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:55:36.148330 systemd[1]: sshd@16-10.200.8.46:22-10.200.16.10:40824.service: Deactivated successfully. Nov 5 15:55:36.150563 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:55:36.152412 systemd-logind[2457]: Removed session 19. Nov 5 15:55:36.269899 systemd[1]: Started sshd@17-10.200.8.46:22-10.200.16.10:40832.service - OpenSSH per-connection server daemon (10.200.16.10:40832). Nov 5 15:55:36.986911 sshd[6220]: Accepted publickey for core from 10.200.16.10 port 40832 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:36.990324 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:36.997305 systemd-logind[2457]: New session 20 of user core. Nov 5 15:55:37.006434 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 5 15:55:37.035107 kubelet[3926]: E1105 15:55:37.035069 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:55:37.617268 sshd[6223]: Connection closed by 10.200.16.10 port 40832 Nov 5 15:55:37.620423 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:37.624951 systemd-logind[2457]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:55:37.625738 systemd[1]: sshd@17-10.200.8.46:22-10.200.16.10:40832.service: Deactivated successfully. Nov 5 15:55:37.628164 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:55:37.631344 systemd-logind[2457]: Removed session 20. Nov 5 15:55:37.743860 systemd[1]: Started sshd@18-10.200.8.46:22-10.200.16.10:40844.service - OpenSSH per-connection server daemon (10.200.16.10:40844). Nov 5 15:55:38.455214 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 40844 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:38.456703 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:38.461323 systemd-logind[2457]: New session 21 of user core. Nov 5 15:55:38.467456 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:55:39.369341 sshd[6236]: Connection closed by 10.200.16.10 port 40844 Nov 5 15:55:39.370189 sshd-session[6233]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:39.376002 systemd[1]: sshd@18-10.200.8.46:22-10.200.16.10:40844.service: Deactivated successfully. Nov 5 15:55:39.379464 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:55:39.382877 systemd-logind[2457]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:55:39.384471 systemd-logind[2457]: Removed session 21. Nov 5 15:55:39.492538 systemd[1]: Started sshd@19-10.200.8.46:22-10.200.16.10:40848.service - OpenSSH per-connection server daemon (10.200.16.10:40848). Nov 5 15:55:40.213023 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 40848 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:40.214209 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:40.222577 systemd-logind[2457]: New session 22 of user core. Nov 5 15:55:40.226482 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:55:40.889310 sshd[6256]: Connection closed by 10.200.16.10 port 40848 Nov 5 15:55:40.890483 sshd-session[6253]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:40.898215 systemd[1]: sshd@19-10.200.8.46:22-10.200.16.10:40848.service: Deactivated successfully. Nov 5 15:55:40.902563 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:55:40.905548 systemd-logind[2457]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:55:40.908895 systemd-logind[2457]: Removed session 22. 
Nov 5 15:55:41.011686 systemd[1]: Started sshd@20-10.200.8.46:22-10.200.16.10:38086.service - OpenSSH per-connection server daemon (10.200.16.10:38086). Nov 5 15:55:41.034592 kubelet[3926]: E1105 15:55:41.034506 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:55:41.720382 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 38086 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:41.722883 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:41.727913 systemd-logind[2457]: New session 23 of user core. Nov 5 15:55:41.734439 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:55:42.037686 kubelet[3926]: E1105 15:55:42.037557 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:55:42.269396 sshd[6269]: Connection closed by 10.200.16.10 port 38086 Nov 5 15:55:42.271376 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:42.275013 systemd-logind[2457]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:55:42.275515 systemd[1]: sshd@20-10.200.8.46:22-10.200.16.10:38086.service: Deactivated successfully. Nov 5 15:55:42.277439 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:55:42.279384 systemd-logind[2457]: Removed session 23. 
Nov 5 15:55:44.034827 kubelet[3926]: E1105 15:55:44.034739 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:55:46.037305 kubelet[3926]: E1105 15:55:46.036664 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:55:47.391920 systemd[1]: Started sshd@21-10.200.8.46:22-10.200.16.10:38102.service - OpenSSH per-connection server daemon (10.200.16.10:38102). Nov 5 15:55:48.035149 kubelet[3926]: E1105 15:55:48.035096 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:55:48.105079 sshd[6283]: Accepted publickey for core from 10.200.16.10 port 38102 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:48.107337 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:48.113437 systemd-logind[2457]: New session 24 of user core. Nov 5 15:55:48.120448 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:55:48.649188 sshd[6286]: Connection closed by 10.200.16.10 port 38102 Nov 5 15:55:48.648362 sshd-session[6283]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:48.653065 systemd[1]: sshd@21-10.200.8.46:22-10.200.16.10:38102.service: Deactivated successfully. Nov 5 15:55:48.654999 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:55:48.655970 systemd-logind[2457]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:55:48.657621 systemd-logind[2457]: Removed session 24. 
Nov 5 15:55:49.349764 containerd[2489]: time="2025-11-05T15:55:49.349690238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de270d80edfb2f105c65d94d8b7484d7022532fb662062593cec7289746a639d\" id:\"4c9879307e6d79eec081c6796a3f50f51045c6508e9dd58e562f6212e74e8a2e\" pid:6310 exited_at:{seconds:1762358149 nanos:348374868}" Nov 5 15:55:52.034796 kubelet[3926]: E1105 15:55:52.034418 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:55:52.034796 kubelet[3926]: E1105 15:55:52.034606 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:55:53.034429 kubelet[3926]: E1105 15:55:53.034336 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:55:53.776590 systemd[1]: Started sshd@22-10.200.8.46:22-10.200.16.10:39872.service - OpenSSH per-connection server daemon (10.200.16.10:39872). Nov 5 15:55:54.497567 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 39872 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:55:54.499596 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:54.506469 systemd-logind[2457]: New session 25 of user core. Nov 5 15:55:54.513655 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 5 15:55:55.033489 kubelet[3926]: E1105 15:55:55.033435 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e" Nov 5 15:55:55.065174 sshd[6325]: Connection closed by 10.200.16.10 port 39872 Nov 5 15:55:55.066249 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:55.071618 systemd[1]: sshd@22-10.200.8.46:22-10.200.16.10:39872.service: Deactivated successfully. Nov 5 15:55:55.074874 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:55:55.076703 systemd-logind[2457]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:55:55.079061 systemd-logind[2457]: Removed session 25. Nov 5 15:55:58.035446 kubelet[3926]: E1105 15:55:58.035180 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-845d8f4b9b-8qsjt" podUID="80c81441-a30e-43a5-948f-5a1c2800b71c" Nov 5 15:55:59.033595 kubelet[3926]: E1105 15:55:59.033548 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xb2bl" podUID="6345634e-739d-4c50-8a09-88a959b92cba" Nov 5 15:56:00.195586 systemd[1]: Started sshd@23-10.200.8.46:22-10.200.16.10:58902.service - OpenSSH per-connection server daemon (10.200.16.10:58902). Nov 5 15:56:00.913835 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 58902 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:56:00.915693 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:00.923659 systemd-logind[2457]: New session 26 of user core. Nov 5 15:56:00.930475 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 5 15:56:01.466670 sshd[6342]: Connection closed by 10.200.16.10 port 58902 Nov 5 15:56:01.467677 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:01.473106 systemd-logind[2457]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:56:01.473257 systemd[1]: sshd@23-10.200.8.46:22-10.200.16.10:58902.service: Deactivated successfully. Nov 5 15:56:01.475223 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:56:01.477174 systemd-logind[2457]: Removed session 26. Nov 5 15:56:05.033248 kubelet[3926]: E1105 15:56:05.033030 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nllfx" podUID="0f742570-0c09-4ef6-8800-4cac3ba577e3" Nov 5 15:56:06.593615 systemd[1]: Started sshd@24-10.200.8.46:22-10.200.16.10:58904.service - OpenSSH per-connection server daemon (10.200.16.10:58904). Nov 5 15:56:07.033858 kubelet[3926]: E1105 15:56:07.033672 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-dp5nk" podUID="c790f420-0686-46b7-ac9a-3d5362dc937f" Nov 5 15:56:07.033858 kubelet[3926]: E1105 15:56:07.033715 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b6458fdcf-zcgg8" podUID="44ab8120-7220-47ae-93cc-8b7e8505e744" Nov 5 15:56:07.317292 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 58904 ssh2: RSA SHA256:A7DyAFZ/mbCLTbAUB/QpnP/Tk45T1OrNVW4dDpqSJVU Nov 5 15:56:07.318229 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:56:07.325806 systemd-logind[2457]: New session 27 of user core. Nov 5 15:56:07.332497 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 5 15:56:07.873823 sshd[6361]: Connection closed by 10.200.16.10 port 58904 Nov 5 15:56:07.874393 sshd-session[6356]: pam_unix(sshd:session): session closed for user core Nov 5 15:56:07.877852 systemd[1]: sshd@24-10.200.8.46:22-10.200.16.10:58904.service: Deactivated successfully. Nov 5 15:56:07.879677 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 15:56:07.880440 systemd-logind[2457]: Session 27 logged out. Waiting for processes to exit. Nov 5 15:56:07.881705 systemd-logind[2457]: Removed session 27. Nov 5 15:56:08.034923 kubelet[3926]: E1105 15:56:08.034886 3926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6445c55d69-kdh6x" podUID="65694c8b-c2eb-4f3f-8724-f2d844e7483e"