Dec 16 13:04:24.997121 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:04:24.997152 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:24.997165 kernel: BIOS-provided physical RAM map:
Dec 16 13:04:24.997173 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:04:24.997181 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 13:04:24.997188 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 13:04:24.997197 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 13:04:24.997204 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 13:04:24.997212 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 13:04:24.997222 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 13:04:24.997229 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 13:04:24.997236 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 13:04:24.997243 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 13:04:24.997250 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 13:04:24.997258 kernel: NX (Execute Disable) protection: active
Dec 16 13:04:24.997271 kernel: APIC: Static calls initialized
Dec 16 13:04:24.997280 kernel: efi: EFI v2.7 by Microsoft
Dec 16 13:04:24.997289 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018
Dec 16 13:04:24.997297 kernel: random: crng init done
Dec 16 13:04:24.997306 kernel: secureboot: Secure boot disabled
Dec 16 13:04:24.997314 kernel: SMBIOS 3.1.0 present.
Dec 16 13:04:24.997323 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 13:04:24.997331 kernel: DMI: Memory slots populated: 2/2
Dec 16 13:04:24.997339 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 13:04:24.997347 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 13:04:24.997355 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 13:04:24.997367 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 13:04:24.997374 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 13:04:24.997382 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:24.997391 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:24.997398 kernel: tsc: Detected 2299.998 MHz processor
Dec 16 13:04:24.997407 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:04:24.997416 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:04:24.997424 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 13:04:24.997434 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:04:24.997442 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:04:24.997452 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 13:04:24.997460 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 13:04:24.997468 kernel: Using GB pages for direct mapping
Dec 16 13:04:24.997476 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:04:24.997488 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 13:04:24.997496 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998628 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998645 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 13:04:24.998667 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 13:04:24.998676 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998685 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998693 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998701 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:24.998712 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:24.998719 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:24.998727 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 13:04:24.998735 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 13:04:24.998743 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 13:04:24.998750 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 13:04:24.998758 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 13:04:24.998766 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 13:04:24.998774 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 13:04:24.998784 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 13:04:24.998792 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 13:04:24.998800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 13:04:24.998807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 13:04:24.998816 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 13:04:24.998823 kernel: NODE_DATA(0) allocated [mem 0x2bfff6dc0-0x2bfffdfff]
Dec 16 13:04:24.998832 kernel: Zone ranges:
Dec 16 13:04:24.998840 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:04:24.998848 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:04:24.998858 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:24.998866 kernel: Device empty
Dec 16 13:04:24.998873 kernel: Movable zone start for each node
Dec 16 13:04:24.998881 kernel: Early memory node ranges
Dec 16 13:04:24.998888 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:04:24.998896 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 13:04:24.998904 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 13:04:24.998912 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 13:04:24.998919 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:24.998929 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 13:04:24.998938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:04:24.998945 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:04:24.998953 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 13:04:24.998960 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 13:04:24.998967 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 13:04:24.998976 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 13:04:24.998984 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:04:24.998992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:04:24.999001 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:04:24.999009 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 13:04:24.999017 kernel: TSC deadline timer available
Dec 16 13:04:24.999025 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:04:24.999032 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:04:24.999040 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:04:24.999047 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:04:24.999055 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:04:24.999063 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:04:24.999071 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:04:24.999081 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 13:04:24.999088 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 13:04:24.999096 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:04:24.999103 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:04:24.999111 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:04:24.999118 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:04:24.999126 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:04:24.999134 kernel: Hyper-V: PV spinlocks enabled
Dec 16 13:04:24.999144 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:04:24.999154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:24.999163 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:04:24.999170 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:04:24.999178 kernel: Fallback order for Node 0: 0
Dec 16 13:04:24.999185 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 13:04:24.999193 kernel: Policy zone: Normal
Dec 16 13:04:24.999200 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:04:24.999208 kernel: software IO TLB: area num 2.
Dec 16 13:04:24.999218 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:04:24.999226 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:04:24.999234 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:04:24.999242 kernel: Dynamic Preempt: voluntary
Dec 16 13:04:24.999249 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:04:24.999258 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:04:24.999272 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:04:24.999282 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:04:24.999291 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:04:24.999300 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:04:24.999309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:04:24.999318 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:04:24.999327 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:24.999335 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:24.999391 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:24.999400 kernel: Using NULL legacy PIC
Dec 16 13:04:24.999409 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 13:04:24.999417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:04:24.999426 kernel: Console: colour dummy device 80x25
Dec 16 13:04:24.999434 kernel: printk: legacy console [tty1] enabled
Dec 16 13:04:24.999442 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:04:24.999451 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 13:04:24.999459 kernel: ACPI: Core revision 20240827
Dec 16 13:04:24.999468 kernel: Failed to register legacy timer interrupt
Dec 16 13:04:24.999477 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:04:24.999487 kernel: x2apic enabled
Dec 16 13:04:24.999495 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:04:24.999504 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 13:04:24.999511 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 13:04:24.999520 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 13:04:24.999527 kernel: Hyper-V: Using IPI hypercalls
Dec 16 13:04:24.999535 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 13:04:24.999542 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 13:04:24.999551 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 13:04:24.999562 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 13:04:24.999570 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 13:04:24.999579 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 13:04:24.999589 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:04:24.999597 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Dec 16 13:04:24.999605 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:04:24.999613 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:04:24.999621 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:04:24.999628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:04:24.999637 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:04:24.999644 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:04:24.999715 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:04:24.999724 kernel: RETBleed: Vulnerable
Dec 16 13:04:24.999732 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:04:24.999741 kernel: active return thunk: its_return_thunk
Dec 16 13:04:24.999749 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:04:24.999759 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:04:24.999767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:04:24.999775 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:04:24.999784 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:04:24.999795 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:04:24.999803 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:04:24.999812 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 13:04:24.999820 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 13:04:24.999828 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 13:04:24.999837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:04:24.999845 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:04:24.999853 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:04:24.999886 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:04:24.999896 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 13:04:24.999905 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 13:04:24.999915 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 13:04:24.999923 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 13:04:24.999932 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:04:24.999940 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:04:24.999948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:04:24.999956 kernel: landlock: Up and running.
Dec 16 13:04:24.999965 kernel: SELinux: Initializing.
Dec 16 13:04:24.999980 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:24.999989 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:24.999998 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 13:04:25.000008 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 13:04:25.000016 kernel: signal: max sigframe size: 11952
Dec 16 13:04:25.000026 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:04:25.000035 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:04:25.000045 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:04:25.000054 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:04:25.000063 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:04:25.000072 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:04:25.000081 kernel: .... node #0, CPUs: #1
Dec 16 13:04:25.000089 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:04:25.000098 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 16 13:04:25.000109 kernel: Memory: 8069080K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308188K reserved, 0K cma-reserved)
Dec 16 13:04:25.000118 kernel: devtmpfs: initialized
Dec 16 13:04:25.000126 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:04:25.000135 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 13:04:25.000144 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:04:25.000153 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:04:25.000162 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:04:25.000171 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:04:25.000179 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:04:25.000190 kernel: audit: type=2000 audit(1765890261.079:1): state=initialized audit_enabled=0 res=1
Dec 16 13:04:25.000199 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:04:25.000208 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:04:25.000217 kernel: cpuidle: using governor menu
Dec 16 13:04:25.000226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:04:25.000234 kernel: dca service started, version 1.12.1
Dec 16 13:04:25.000242 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 13:04:25.000251 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 13:04:25.000261 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:04:25.000269 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:04:25.000278 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:04:25.000287 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:04:25.000296 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:04:25.000305 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:04:25.000313 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:04:25.000322 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:04:25.000330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:04:25.000340 kernel: ACPI: Interpreter enabled
Dec 16 13:04:25.000348 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:04:25.000356 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:04:25.000365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:04:25.000374 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:04:25.000383 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 13:04:25.000392 kernel: iommu: Default domain type: Translated
Dec 16 13:04:25.000400 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:04:25.000409 kernel: efivars: Registered efivars operations
Dec 16 13:04:25.000419 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:04:25.000427 kernel: PCI: System does not support PCI
Dec 16 13:04:25.000436 kernel: vgaarb: loaded
Dec 16 13:04:25.000445 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 13:04:25.000453 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:04:25.000462 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:04:25.000471 kernel: pnp: PnP ACPI init
Dec 16 13:04:25.000480 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 13:04:25.000488 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:04:25.000496 kernel: NET: Registered PF_INET protocol family
Dec 16 13:04:25.000507 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:04:25.000516 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:04:25.000525 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:04:25.000534 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:04:25.000543 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:04:25.000552 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:04:25.000560 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:25.000569 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:25.000579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:04:25.000587 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:04:25.000596 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:04:25.000605 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:04:25.000613 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Dec 16 13:04:25.000622 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:04:25.000631 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 13:04:25.000640 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:04:25.000649 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:04:25.000669 kernel: Initialise system trusted keyrings
Dec 16 13:04:25.000678 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:04:25.000687 kernel: Key type asymmetric registered
Dec 16 13:04:25.000696 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:04:25.000705 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:04:25.000714 kernel: io scheduler mq-deadline registered
Dec 16 13:04:25.000723 kernel: io scheduler kyber registered
Dec 16 13:04:25.000731 kernel: io scheduler bfq registered
Dec 16 13:04:25.000739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:04:25.000750 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:04:25.000758 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:25.000766 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:04:25.000776 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:25.000784 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 13:04:25.000931 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 13:04:25.001007 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:04:24 UTC (1765890264)
Dec 16 13:04:25.001076 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 13:04:25.001089 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 13:04:25.001098 kernel: efifb: probing for efifb
Dec 16 13:04:25.001108 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 13:04:25.001117 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 13:04:25.001126 kernel: efifb: scrolling: redraw
Dec 16 13:04:25.001135 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:04:25.001143 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:04:25.001152 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:04:25.001160 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:04:25.001170 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:04:25.001179 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:04:25.001188 kernel: Segment Routing with IPv6
Dec 16 13:04:25.001196 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:04:25.001205 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:04:25.001214 kernel: Key type dns_resolver registered
Dec 16 13:04:25.001223 kernel: IPI shorthand broadcast: enabled
Dec 16 13:04:25.001231 kernel: sched_clock: Marking stable (3079006020, 103075220)->(3543073126, -360991886)
Dec 16 13:04:25.001239 kernel: registered taskstats version 1
Dec 16 13:04:25.001250 kernel: Loading compiled-in X.509 certificates
Dec 16 13:04:25.001259 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:04:25.001268 kernel: Demotion targets for Node 0: null
Dec 16 13:04:25.001277 kernel: Key type .fscrypt registered
Dec 16 13:04:25.001286 kernel: Key type fscrypt-provisioning registered
Dec 16 13:04:25.001294 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:04:25.001302 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:04:25.001310 kernel: ima: No architecture policies found
Dec 16 13:04:25.001319 kernel: clk: Disabling unused clocks
Dec 16 13:04:25.001329 kernel: Warning: unable to open an initial console.
Dec 16 13:04:25.001339 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:04:25.001348 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:04:25.001357 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:04:25.001365 kernel: Run /init as init process
Dec 16 13:04:25.001374 kernel: with arguments:
Dec 16 13:04:25.001382 kernel: /init
Dec 16 13:04:25.001390 kernel: with environment:
Dec 16 13:04:25.001398 kernel: HOME=/
Dec 16 13:04:25.001408 kernel: TERM=linux
Dec 16 13:04:25.001418 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:04:25.001431 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:04:25.001441 systemd[1]: Detected virtualization microsoft.
Dec 16 13:04:25.001451 systemd[1]: Detected architecture x86-64.
Dec 16 13:04:25.001459 systemd[1]: Running in initrd.
Dec 16 13:04:25.001468 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:04:25.001522 systemd[1]: Hostname set to .
Dec 16 13:04:25.001531 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:04:25.001540 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:04:25.001551 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:04:25.001562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:04:25.001572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:04:25.001582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:04:25.001591 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:04:25.001603 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:04:25.001614 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:04:25.001625 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:04:25.001636 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:04:25.001647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:04:25.001674 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:04:25.001685 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:04:25.001697 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:04:25.001707 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:04:25.001716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:04:25.001725 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:04:25.001735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:04:25.001744 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:04:25.001753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:04:25.001761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:04:25.001770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:04:25.001780 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:04:25.001790 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:04:25.001800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:04:25.001809 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:04:25.001818 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:04:25.001828 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:04:25.001838 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:04:25.001849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:04:25.001870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:25.001884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:04:25.001894 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:04:25.001926 systemd-journald[186]: Collecting audit messages is disabled.
Dec 16 13:04:25.001951 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:04:25.001962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:25.001973 systemd-journald[186]: Journal started
Dec 16 13:04:25.002003 systemd-journald[186]: Runtime Journal (/run/log/journal/9bad3eadecbd459f9e0b26aef0bcf41d) is 8M, max 158.6M, 150.6M free.
Dec 16 13:04:25.006563 systemd-modules-load[187]: Inserted module 'overlay'
Dec 16 13:04:25.009572 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:04:25.014763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:04:25.021769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:04:25.032756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:04:25.046533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:04:25.045878 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:04:25.051272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:04:25.059644 kernel: Bridge firewalling registered
Dec 16 13:04:25.059719 systemd-modules-load[187]: Inserted module 'br_netfilter'
Dec 16 13:04:25.060615 systemd-tmpfiles[202]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:04:25.063553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:04:25.066179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:04:25.069955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:04:25.077757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:04:25.085361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:04:25.092256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:04:25.094417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:04:25.104609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:04:25.114181 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:25.156313 systemd-resolved[230]: Positive Trust Anchors:
Dec 16 13:04:25.157705 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:04:25.157742 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:04:25.160262 systemd-resolved[230]: Defaulting to hostname 'linux'.
Dec 16 13:04:25.161523 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:04:25.162967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:04:25.209676 kernel: SCSI subsystem initialized
Dec 16 13:04:25.218669 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:04:25.228678 kernel: iscsi: registered transport (tcp)
Dec 16 13:04:25.247031 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:04:25.247073 kernel: QLogic iSCSI HBA Driver
Dec 16 13:04:25.260893 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:04:25.274067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:04:25.280580 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:04:25.312253 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:04:25.313719 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:04:25.377680 kernel: raid6: avx512x4 gen() 42341 MB/s
Dec 16 13:04:25.395666 kernel: raid6: avx512x2 gen() 41458 MB/s
Dec 16 13:04:25.413665 kernel: raid6: avx512x1 gen() 25176 MB/s
Dec 16 13:04:25.432665 kernel: raid6: avx2x4 gen() 35283 MB/s
Dec 16 13:04:25.450666 kernel: raid6: avx2x2 gen() 37628 MB/s
Dec 16 13:04:25.469109 kernel: raid6: avx2x1 gen() 28505 MB/s
Dec 16 13:04:25.469125 kernel: raid6: using algorithm avx512x4 gen() 42341 MB/s
Dec 16 13:04:25.489112 kernel: raid6: .... xor() 7322 MB/s, rmw enabled
Dec 16 13:04:25.489134 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 13:04:25.507674 kernel: xor: automatically using best checksumming function avx
Dec 16 13:04:25.630681 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:04:25.636361 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:04:25.638787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:04:25.656409 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Dec 16 13:04:25.660702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:04:25.669532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:04:25.701343 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Dec 16 13:04:25.720974 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:04:25.724776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:04:25.759612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:04:25.766889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:04:25.816800 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:04:25.825688 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 13:04:25.834575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:04:25.843689 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 13:04:25.834721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:25.839746 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:25.853765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:25.866726 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 13:04:25.866747 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 13:04:25.866758 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 16 13:04:25.866769 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:04:25.878192 kernel: PTP clock support registered
Dec 16 13:04:25.883474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:04:25.883561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:25.891857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:25.909650 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 13:04:25.909701 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 13:04:25.914020 kernel: scsi host0: storvsc_host_t
Dec 16 13:04:25.915772 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 13:04:25.930795 kernel: hv_netvsc f8615163-0000-1000-2000-00224883cf90 (unnamed net_device) (uninitialized): VF slot 1 added
Dec 16 13:04:25.951700 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 13:04:25.954673 kernel: hv_vmbus: registering driver hv_pci
Dec 16 13:04:25.960700 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Dec 16 13:04:25.962107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:25.976969 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Dec 16 13:04:25.977103 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 13:04:25.977115 kernel: hv_vmbus: registering driver hv_utils
Dec 16 13:04:25.977125 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Dec 16 13:04:25.981696 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 13:04:25.981731 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:04:25.986678 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 16 13:04:25.986712 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:04:25.990221 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 13:04:25.992765 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Dec 16 13:04:25.999224 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 13:04:25.999255 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 13:04:26.006448 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 13:04:26.006595 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 13:04:26.006603 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 13:04:26.057071 systemd-resolved[230]: Clock change detected. Flushing caches.
Dec 16 13:04:26.061557 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 13:04:26.093883 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:04:26.094077 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Dec 16 13:04:26.108503 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:04:26.116200 kernel: nvme nvme0: pci function c05b:00:00.0
Dec 16 13:04:26.119433 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:04:26.139502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:04:26.277498 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 13:04:26.289503 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:04:26.700509 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:04:26.891173 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Dec 16 13:04:26.936947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:04:26.940879 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:04:26.953223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:04:26.959165 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:04:27.001493 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Dec 16 13:04:27.008573 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Dec 16 13:04:27.008743 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Dec 16 13:04:27.011042 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:04:27.017781 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:04:27.021628 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Dec 16 13:04:27.027516 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Dec 16 13:04:27.029906 kernel: pci 7870:00:00.0: enabling Extended Tags
Dec 16 13:04:27.051523 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:04:27.051727 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 16 13:04:27.056025 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Dec 16 13:04:27.064331 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:04:27.078509 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Dec 16 13:04:27.082063 kernel: hv_netvsc f8615163-0000-1000-2000-00224883cf90 eth0: VF registering: eth1
Dec 16 13:04:27.082243 kernel: mana 7870:00:00.0 eth1: joined to eth0
Dec 16 13:04:27.085500 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Dec 16 13:04:27.240684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Dec 16 13:04:27.324921 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:04:27.325453 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:04:27.325557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:04:27.325703 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:04:27.330158 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:04:27.353624 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:04:27.996586 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:04:27.996979 disk-uuid[647]: The operation has completed successfully.
Dec 16 13:04:28.056996 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:04:28.057099 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:04:28.091933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:04:28.109758 sh[697]: Success
Dec 16 13:04:28.141746 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:04:28.141808 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:04:28.143331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:04:28.153559 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:04:28.425179 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:04:28.429437 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:04:28.447027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:04:28.457529 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (710)
Dec 16 13:04:28.460545 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:04:28.460637 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:04:28.766900 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:04:28.767108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:04:28.767175 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:04:28.802980 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:04:28.805955 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:04:28.809636 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:04:28.810274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:04:28.822058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:04:28.849520 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (745)
Dec 16 13:04:28.855251 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:04:28.855293 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:04:28.879883 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:04:28.879941 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:04:28.881234 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:04:28.886602 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:04:28.887343 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:04:28.892617 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:04:28.906497 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:04:28.908881 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:04:28.942564 systemd-networkd[879]: lo: Link UP
Dec 16 13:04:28.942573 systemd-networkd[879]: lo: Gained carrier
Dec 16 13:04:28.943602 systemd-networkd[879]: Enumeration completed
Dec 16 13:04:28.952618 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:04:28.953050 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:04:28.953601 kernel: hv_netvsc f8615163-0000-1000-2000-00224883cf90 eth0: Data path switched to VF: enP30832s1
Dec 16 13:04:28.943987 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:04:28.943990 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:04:28.944579 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:04:28.950614 systemd[1]: Reached target network.target - Network.
Dec 16 13:04:28.955274 systemd-networkd[879]: enP30832s1: Link UP
Dec 16 13:04:28.955464 systemd-networkd[879]: eth0: Link UP
Dec 16 13:04:28.955883 systemd-networkd[879]: eth0: Gained carrier
Dec 16 13:04:28.955895 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:04:28.958097 systemd-networkd[879]: enP30832s1: Gained carrier
Dec 16 13:04:28.975427 systemd-networkd[879]: eth0: DHCPv4 address 10.200.0.43/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:04:29.984803 ignition[862]: Ignition 2.22.0
Dec 16 13:04:29.984815 ignition[862]: Stage: fetch-offline
Dec 16 13:04:29.986572 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:04:29.984922 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:29.984928 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:29.993625 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:04:29.985012 ignition[862]: parsed url from cmdline: ""
Dec 16 13:04:29.985015 ignition[862]: no config URL provided
Dec 16 13:04:29.985019 ignition[862]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:04:29.985025 ignition[862]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:04:29.985029 ignition[862]: failed to fetch config: resource requires networking
Dec 16 13:04:29.985271 ignition[862]: Ignition finished successfully
Dec 16 13:04:30.020261 ignition[889]: Ignition 2.22.0
Dec 16 13:04:30.020273 ignition[889]: Stage: fetch
Dec 16 13:04:30.020465 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:30.020472 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:30.020588 ignition[889]: parsed url from cmdline: ""
Dec 16 13:04:30.020591 ignition[889]: no config URL provided
Dec 16 13:04:30.020602 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:04:30.020608 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:04:30.020629 ignition[889]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 13:04:30.090848 ignition[889]: GET result: OK
Dec 16 13:04:30.090922 ignition[889]: config has been read from IMDS userdata
Dec 16 13:04:30.090952 ignition[889]: parsing config with SHA512: 24a5c0a591406a654924a63f539b95c1951fc33690c625234585df64822079e065286abb4ae2d3743844600b64c8aba34c0a75d856296e7153b49a85a3098433
Dec 16 13:04:30.097617 unknown[889]: fetched base config from "system"
Dec 16 13:04:30.097627 unknown[889]: fetched base config from "system"
Dec 16 13:04:30.097964 ignition[889]: fetch: fetch complete
Dec 16 13:04:30.097632 unknown[889]: fetched user config from "azure"
Dec 16 13:04:30.097975 ignition[889]: fetch: fetch passed
Dec 16 13:04:30.100225 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:04:30.098014 ignition[889]: Ignition finished successfully
Dec 16 13:04:30.102605 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:04:30.129971 ignition[895]: Ignition 2.22.0
Dec 16 13:04:30.129982 ignition[895]: Stage: kargs
Dec 16 13:04:30.130208 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:30.133178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:04:30.130215 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:30.137843 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:04:30.131361 ignition[895]: kargs: kargs passed
Dec 16 13:04:30.131408 ignition[895]: Ignition finished successfully
Dec 16 13:04:30.159496 ignition[901]: Ignition 2.22.0
Dec 16 13:04:30.160401 ignition[901]: Stage: disks
Dec 16 13:04:30.160640 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:30.162100 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:04:30.160648 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:30.163063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:04:30.161177 ignition[901]: disks: disks passed
Dec 16 13:04:30.164254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:04:30.161205 ignition[901]: Ignition finished successfully
Dec 16 13:04:30.164288 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:04:30.164605 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:04:30.164628 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:04:30.165564 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:04:30.252088 systemd-fsck[910]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Dec 16 13:04:30.256305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:04:30.263560 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:04:30.539516 systemd-networkd[879]: eth0: Gained IPv6LL
Dec 16 13:04:30.543261 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:04:30.542855 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:04:30.544320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:04:30.564345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:04:30.569618 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:04:30.584675 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 13:04:30.589916 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:04:30.589952 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:04:30.601867 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (919)
Dec 16 13:04:30.601890 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:04:30.601902 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:04:30.602538 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:04:30.605686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:04:30.612555 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:04:30.612584 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:04:30.612595 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:04:30.614593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:04:31.080867 coreos-metadata[921]: Dec 16 13:04:31.080 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:04:31.087981 coreos-metadata[921]: Dec 16 13:04:31.087 INFO Fetch successful
Dec 16 13:04:31.087981 coreos-metadata[921]: Dec 16 13:04:31.087 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:04:31.097945 coreos-metadata[921]: Dec 16 13:04:31.097 INFO Fetch successful
Dec 16 13:04:31.113900 coreos-metadata[921]: Dec 16 13:04:31.113 INFO wrote hostname ci-4459.2.2-a-efe6a0b1f4 to /sysroot/etc/hostname
Dec 16 13:04:31.117605 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:04:31.391385 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:04:31.429035 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:04:31.448858 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:04:31.453826 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:04:32.344125 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:04:32.348758 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:04:32.353179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:04:32.373386 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:04:32.376245 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:04:32.393014 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:04:32.404049 ignition[1038]: INFO : Ignition 2.22.0
Dec 16 13:04:32.404049 ignition[1038]: INFO : Stage: mount
Dec 16 13:04:32.406262 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:32.406262 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:32.406262 ignition[1038]: INFO : mount: mount passed
Dec 16 13:04:32.406262 ignition[1038]: INFO : Ignition finished successfully
Dec 16 13:04:32.408418 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:04:32.411979 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:04:32.430804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:04:32.454868 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1050)
Dec 16 13:04:32.454905 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:04:32.455499 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:04:32.462013 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:04:32.462049 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:04:32.464019 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:04:32.465567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:04:32.497472 ignition[1067]: INFO : Ignition 2.22.0
Dec 16 13:04:32.497472 ignition[1067]: INFO : Stage: files
Dec 16 13:04:32.501463 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:32.501463 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:32.501463 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:04:32.508218 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:04:32.508218 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:04:32.586617 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:04:32.588844 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:04:32.588844 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:04:32.586998 unknown[1067]: wrote ssh authorized keys file for user: core
Dec 16 13:04:32.603309 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:04:32.607552 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:04:32.642499 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:04:32.727444 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:04:32.732619 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:04:32.760517 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 13:04:33.071445 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:04:33.316105 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:04:33.316105 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:04:33.364642 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:04:33.374188 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:04:33.374188 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:04:33.374188 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:04:33.387571 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:04:33.387571 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:04:33.387571 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:04:33.387571 ignition[1067]: INFO : files: files passed
Dec 16 13:04:33.387571 ignition[1067]: INFO : Ignition finished successfully
Dec 16 13:04:33.376055 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:04:33.379560 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:04:33.395597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:04:33.405663 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:04:33.413574 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:04:33.422502 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:04:33.422502 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:04:33.428597 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:04:33.431843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:04:33.432363 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:04:33.436473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:04:33.488912 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:04:33.489003 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:04:33.493931 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:04:33.499260 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:04:33.503735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:04:33.504315 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:04:33.519134 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:04:33.521596 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:04:33.534855 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:04:33.535021 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:04:33.535280 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:04:33.535662 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:04:33.535782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:04:33.544919 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:04:33.547310 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:04:33.551634 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:04:33.554441 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:04:33.559662 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:04:33.564642 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:04:33.568634 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:04:33.573636 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:04:33.577649 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:04:33.581646 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:04:33.584393 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:04:33.588611 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:04:33.588752 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:04:33.592802 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:04:33.595813 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:04:33.600598 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:04:33.600851 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:04:33.613609 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:04:33.613730 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:04:33.616498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:04:33.616598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:04:33.619768 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:04:33.619891 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:04:33.620604 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:04:33.620735 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:04:33.622679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:04:33.624581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:04:33.624696 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:04:33.624829 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:04:33.627609 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:04:33.627748 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:04:33.633017 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:04:33.633102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:04:33.684193 ignition[1121]: INFO : Ignition 2.22.0
Dec 16 13:04:33.684193 ignition[1121]: INFO : Stage: umount
Dec 16 13:04:33.688335 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:04:33.688335 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:04:33.688335 ignition[1121]: INFO : umount: umount passed
Dec 16 13:04:33.688335 ignition[1121]: INFO : Ignition finished successfully
Dec 16 13:04:33.684656 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:04:33.687187 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:04:33.687289 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:04:33.689038 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:04:33.689114 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:04:33.690439 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:04:33.690495 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:04:33.694552 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:04:33.694588 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:04:33.698082 systemd[1]: Stopped target network.target - Network.
Dec 16 13:04:33.699081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:04:33.699120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:04:33.702069 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:04:33.704529 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:04:33.707519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:04:33.710531 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:04:33.714535 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:04:33.718556 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:04:33.718593 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:04:33.722551 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:04:33.722585 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:04:33.723917 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:04:33.723961 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:04:33.724531 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:04:33.724560 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:04:33.724845 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:04:33.725119 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:04:33.737299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:04:33.737394 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:04:33.742276 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:04:33.742468 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:04:33.742591 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:04:33.748640 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:04:33.749142 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:04:33.752654 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:04:33.832860 kernel: hv_netvsc f8615163-0000-1000-2000-00224883cf90 eth0: Data path switched from VF: enP30832s1
Dec 16 13:04:33.752686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:04:33.757130 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:04:33.760535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:04:33.760590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:04:33.765986 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:04:33.766030 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:04:33.772780 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:04:33.772820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:04:33.777293 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:04:33.777339 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:04:33.781814 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:04:33.857567 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:04:33.790391 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:04:33.790451 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:04:33.794825 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:04:33.796545 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:04:33.797674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:04:33.797746 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:04:33.802574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:04:33.802603 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:04:33.806556 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:04:33.806599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:04:33.808204 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:04:33.808247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:04:33.817566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:04:33.817617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:04:33.822416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:04:33.828357 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:04:33.828412 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:04:33.835462 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:04:33.835519 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:04:33.861540 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:04:33.861578 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:04:33.864959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:04:33.864995 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:04:33.870551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:04:33.870588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:33.874996 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:04:33.875042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:04:33.875072 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:04:33.875106 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:04:33.875442 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:04:33.875532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:04:33.877205 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:04:33.877274 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:04:33.881701 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:04:33.881770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:04:33.886342 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:04:33.886902 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:04:33.886972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:04:33.888990 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:04:33.910892 systemd[1]: Switching root.
Dec 16 13:04:33.988923 systemd-journald[186]: Journal stopped
Dec 16 13:04:38.177607 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:04:38.177645 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:04:38.177662 kernel: SELinux: policy capability open_perms=1
Dec 16 13:04:38.177670 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:04:38.177678 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:04:38.177686 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:04:38.177696 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:04:38.177706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:04:38.177717 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:04:38.177726 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:04:38.177734 kernel: audit: type=1403 audit(1765890275.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:04:38.177744 systemd[1]: Successfully loaded SELinux policy in 201.330ms.
Dec 16 13:04:38.177755 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.217ms.
Dec 16 13:04:38.177766 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:04:38.177779 systemd[1]: Detected virtualization microsoft.
Dec 16 13:04:38.177789 systemd[1]: Detected architecture x86-64.
Dec 16 13:04:38.177798 systemd[1]: Detected first boot.
Dec 16 13:04:38.177807 systemd[1]: Hostname set to <ci-4459.2.2-a-efe6a0b1f4>.
Dec 16 13:04:38.177817 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:04:38.177828 zram_generator::config[1164]: No configuration found.
Dec 16 13:04:38.177841 kernel: Guest personality initialized and is inactive
Dec 16 13:04:38.177850 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Dec 16 13:04:38.177859 kernel: Initialized host personality
Dec 16 13:04:38.177868 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:04:38.177877 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:04:38.177887 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:04:38.177896 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:04:38.177906 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:04:38.177918 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:04:38.177928 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:04:38.177938 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:04:38.177947 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:04:38.177957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:04:38.177967 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:04:38.177977 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:04:38.177990 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:04:38.178000 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:04:38.178010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:04:38.178019 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:04:38.178029 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:04:38.178044 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:04:38.178055 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:04:38.178066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:04:38.178078 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:04:38.178088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:04:38.178098 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:04:38.178108 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:04:38.178118 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:04:38.178129 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:04:38.178140 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:04:38.178151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:04:38.178161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:04:38.178171 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:04:38.178180 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:04:38.178191 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:04:38.178202 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:04:38.178215 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:04:38.178225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:04:38.178235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:04:38.178245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:04:38.178255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:04:38.178267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:04:38.178278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:04:38.178290 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:04:38.178300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:04:38.178310 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:04:38.178319 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:04:38.178330 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:04:38.178341 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:04:38.178352 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:04:38.178363 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:04:38.178372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:04:38.178384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:04:38.178394 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:04:38.178404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:04:38.178413 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:04:38.178423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:04:38.178433 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:04:38.178442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:04:38.178452 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:04:38.178464 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:04:38.178474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:04:38.178496 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:04:38.178507 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:04:38.178518 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:04:38.178530 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:04:38.178540 kernel: loop: module loaded
Dec 16 13:04:38.178549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:04:38.178561 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:04:38.178571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:04:38.178581 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:04:38.178592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:04:38.178602 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:04:38.178635 systemd-journald[1257]: Collecting audit messages is disabled.
Dec 16 13:04:38.178662 systemd[1]: Stopped verity-setup.service.
Dec 16 13:04:38.178673 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:04:38.178684 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:04:38.178695 systemd-journald[1257]: Journal started
Dec 16 13:04:38.178720 systemd-journald[1257]: Runtime Journal (/run/log/journal/165d8967d02c49e498e85a92a5b82b0b) is 8M, max 158.6M, 150.6M free.
Dec 16 13:04:37.771391 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:04:37.779059 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:04:37.779376 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:04:38.183717 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:04:38.186646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:04:38.188701 kernel: fuse: init (API version 7.41)
Dec 16 13:04:38.191654 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:04:38.194704 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:04:38.198656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:04:38.201643 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:04:38.203268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:04:38.207734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:04:38.211982 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:04:38.212168 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:04:38.215427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:04:38.215711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:04:38.220787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:04:38.220942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:04:38.222889 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:04:38.223030 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:04:38.226702 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:04:38.226838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:04:38.229744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:04:38.232863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:04:38.238513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:04:38.243530 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:04:38.259259 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:04:38.265586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:04:38.273566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:04:38.278609 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:04:38.278639 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:04:38.282542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:04:38.287921 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:04:38.290348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:04:38.293459 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:04:38.301646 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:04:38.304212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:04:38.310871 kernel: ACPI: bus type drm_connector registered
Dec 16 13:04:38.309533 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:04:38.312614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:04:38.314603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:04:38.318246 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:04:38.328950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:04:38.333229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:04:38.333418 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:04:38.335912 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:04:38.338839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:04:38.340659 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:04:38.345813 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:04:38.349233 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:04:38.354370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:04:38.376627 systemd-journald[1257]: Time spent on flushing to /var/log/journal/165d8967d02c49e498e85a92a5b82b0b is 23.730ms for 995 entries.
Dec 16 13:04:38.376627 systemd-journald[1257]: System Journal (/var/log/journal/165d8967d02c49e498e85a92a5b82b0b) is 8M, max 2.6G, 2.6G free.
Dec 16 13:04:38.508346 systemd-journald[1257]: Received client request to flush runtime journal.
Dec 16 13:04:38.508440 kernel: loop0: detected capacity change from 0 to 110984
Dec 16 13:04:38.399910 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Dec 16 13:04:38.399923 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Dec 16 13:04:38.403400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:04:38.408000 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:04:38.450739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:04:38.509670 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:04:38.521067 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:04:38.613988 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:04:38.616670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:04:38.634929 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:04:38.634947 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:04:38.637201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:04:38.780894 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:04:38.924520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:04:38.953252 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:04:38.958162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:04:38.965503 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:04:38.992418 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Dec 16 13:04:39.232268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:04:39.237093 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:04:39.287680 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:04:39.303619 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:04:39.360618 kernel: loop2: detected capacity change from 0 to 27936
Dec 16 13:04:39.387319 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:04:39.430500 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:04:39.435502 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:04:39.439513 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:04:39.445516 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:04:39.448507 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:04:39.452658 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:04:39.454923 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:04:39.459770 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:04:39.469503 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:04:39.568685 systemd-networkd[1337]: lo: Link UP
Dec 16 13:04:39.568995 systemd-networkd[1337]: lo: Gained carrier
Dec 16 13:04:39.570350 systemd-networkd[1337]: Enumeration completed
Dec 16 13:04:39.570539 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:04:39.572242 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:04:39.572350 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:04:39.573826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:04:39.583799 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:04:39.581546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:04:39.597083 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:04:39.605619 kernel: hv_netvsc f8615163-0000-1000-2000-00224883cf90 eth0: Data path switched to VF: enP30832s1
Dec 16 13:04:39.602714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:39.604441 systemd-networkd[1337]: enP30832s1: Link UP
Dec 16 13:04:39.604537 systemd-networkd[1337]: eth0: Link UP
Dec 16 13:04:39.604540 systemd-networkd[1337]: eth0: Gained carrier
Dec 16 13:04:39.604561 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:04:39.609696 systemd-networkd[1337]: enP30832s1: Gained carrier
Dec 16 13:04:39.616820 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.0.43/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:04:39.619830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:04:39.620028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:39.624652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:39.635734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:04:39.638567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:39.644358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:39.664179 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:04:39.741123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:04:39.748600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:04:39.794980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:04:39.808498 kernel: loop3: detected capacity change from 0 to 229808
Dec 16 13:04:39.820503 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:04:39.868504 kernel: loop4: detected capacity change from 0 to 110984
Dec 16 13:04:39.877502 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:04:39.886500 kernel: loop6: detected capacity change from 0 to 27936
Dec 16 13:04:39.898551 kernel: loop7: detected capacity change from 0 to 229808
Dec 16 13:04:39.908716 (sd-merge)[1430]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 13:04:39.909098 (sd-merge)[1430]: Merged extensions into '/usr'.
Dec 16 13:04:39.912024 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:04:39.912038 systemd[1]: Reloading...
Dec 16 13:04:39.964515 zram_generator::config[1459]: No configuration found.
Dec 16 13:04:40.170988 systemd[1]: Reloading finished in 258 ms.
Dec 16 13:04:40.195652 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:04:40.197465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:40.213275 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:04:40.216606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:04:40.235410 systemd[1]: Reload requested from client PID 1521 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:04:40.235432 systemd[1]: Reloading...
Dec 16 13:04:40.239099 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:04:40.239344 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:04:40.239674 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:04:40.239959 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:04:40.240953 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:04:40.241292 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Dec 16 13:04:40.241383 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Dec 16 13:04:40.262548 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:04:40.262561 systemd-tmpfiles[1522]: Skipping /boot
Dec 16 13:04:40.271112 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:04:40.272555 systemd-tmpfiles[1522]: Skipping /boot
Dec 16 13:04:40.306512 zram_generator::config[1550]: No configuration found.
Dec 16 13:04:40.483915 systemd[1]: Reloading finished in 248 ms.
Dec 16 13:04:40.499362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:04:40.524816 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:04:40.529444 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:04:40.534592 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:04:40.539284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:04:40.544723 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:04:40.556704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:04:40.556945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:04:40.559322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:04:40.566574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:04:40.571677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:04:40.575128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:04:40.577508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:04:40.577638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:04:40.577804 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:04:40.579907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:04:40.589284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:04:40.589865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:04:40.594316 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:04:40.601550 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:04:40.610718 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:04:40.610896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:04:40.614020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:04:40.614181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:04:40.618360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:04:40.621028 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:04:40.621181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:04:40.623871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:04:40.644381 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:04:40.665310 systemd-resolved[1616]: Positive Trust Anchors:
Dec 16 13:04:40.665557 systemd-resolved[1616]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:04:40.665602 systemd-resolved[1616]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:04:40.683370 systemd-resolved[1616]: Using system hostname 'ci-4459.2.2-a-efe6a0b1f4'.
Dec 16 13:04:40.684456 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:04:40.686005 systemd[1]: Reached target network.target - Network.
Dec 16 13:04:40.687323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:04:40.730584 augenrules[1649]: No rules
Dec 16 13:04:40.731539 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:04:40.731718 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:04:40.842654 systemd-networkd[1337]: eth0: Gained IPv6LL
Dec 16 13:04:40.844853 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:04:40.846953 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:04:41.290611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:04:41.295780 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:04:43.838552 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:04:43.855616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:04:43.863063 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:04:43.873551 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:04:43.876706 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:04:43.879614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:04:43.881368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:04:43.884530 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:04:43.886320 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:04:43.889582 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:04:43.891381 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:04:43.893285 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:04:43.893335 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:04:43.894421 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:04:43.896570 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:04:43.900449 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:04:43.905049 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:04:43.907658 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:04:43.910532 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:04:43.923966 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:04:43.928058 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:04:43.930276 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:04:43.934236 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:04:43.937536 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:04:43.940579 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:04:43.940605 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:04:43.942490 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 13:04:43.946395 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:04:43.952602 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:04:43.959766 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:04:43.964170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:04:43.970589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:04:43.974654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:04:43.976728 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:04:43.978577 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:04:43.980730 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 16 13:04:43.984026 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 13:04:43.986055 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 13:04:43.991050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:04:44.003662 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:04:44.009204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:04:44.016554 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:04:44.017998 extend-filesystems[1671]: Found /dev/nvme0n1p6
Dec 16 13:04:44.020428 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:04:44.025840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:04:44.032822 KVP[1673]: KVP starting; pid is:1673
Dec 16 13:04:44.033655 jq[1670]: false
Dec 16 13:04:44.034696 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:04:44.039312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:04:44.041092 extend-filesystems[1671]: Found /dev/nvme0n1p9
Dec 16 13:04:44.039760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:04:44.045649 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing passwd entry cache
Dec 16 13:04:44.041925 chronyd[1662]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 13:04:44.043410 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:04:44.044817 oslogin_cache_refresh[1672]: Refreshing passwd entry cache
Dec 16 13:04:44.047326 KVP[1673]: KVP LIC Version: 3.1
Dec 16 13:04:44.047529 kernel: hv_utils: KVP IC version 4.0
Dec 16 13:04:44.047562 extend-filesystems[1671]: Checking size of /dev/nvme0n1p9
Dec 16 13:04:44.055098 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:04:44.065886 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:04:44.070853 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:04:44.071031 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:04:44.074545 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:04:44.074733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:04:44.082325 chronyd[1662]: Timezone right/UTC failed leap second check, ignoring
Dec 16 13:04:44.082510 chronyd[1662]: Loaded seccomp filter (level 2)
Dec 16 13:04:44.083387 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 13:04:44.095147 jq[1688]: true
Dec 16 13:04:44.106076 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:04:44.106284 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:04:44.114502 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting users, quitting
Dec 16 13:04:44.114502 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:04:44.114502 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing group entry cache
Dec 16 13:04:44.112836 oslogin_cache_refresh[1672]: Failure getting users, quitting
Dec 16 13:04:44.112855 oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:04:44.112898 oslogin_cache_refresh[1672]: Refreshing group entry cache
Dec 16 13:04:44.123527 (ntainerd)[1705]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:04:44.125540 extend-filesystems[1671]: Old size kept for /dev/nvme0n1p9
Dec 16 13:04:44.126010 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:04:44.126263 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:04:44.128233 jq[1712]: true Dec 16 13:04:44.130740 update_engine[1685]: I20251216 13:04:44.130293 1685 main.cc:92] Flatcar Update Engine starting Dec 16 13:04:44.133845 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting groups, quitting Dec 16 13:04:44.133843 oslogin_cache_refresh[1672]: Failure getting groups, quitting Dec 16 13:04:44.133941 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:44.133854 oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:44.139083 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:04:44.139312 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:04:44.168811 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:04:44.223942 systemd-logind[1684]: New seat seat0. Dec 16 13:04:44.232730 systemd-logind[1684]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:04:44.232888 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:04:44.240878 tar[1695]: linux-amd64/LICENSE Dec 16 13:04:44.240878 tar[1695]: linux-amd64/helm Dec 16 13:04:44.271903 bash[1742]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:04:44.272942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:04:44.276952 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:04:44.295321 dbus-daemon[1665]: [system] SELinux support is enabled Dec 16 13:04:44.296520 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:04:44.301731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:04:44.302459 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:04:44.305851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:04:44.305870 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:04:44.314180 dbus-daemon[1665]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:04:44.316295 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:04:44.319307 update_engine[1685]: I20251216 13:04:44.318802 1685 update_check_scheduler.cc:74] Next update check in 4m12s Dec 16 13:04:44.343622 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
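update_engine ("Next update check in 4m12s") and locksmithd (strategy="reboot") cooperate on Flatcar updates: the engine downloads and applies payloads, locksmithd decides when the machine may reboot into them. Both are steered by update.conf; a sketch with illustrative values:

    # /etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot   # alternatives: etcd-lock, best-effort, off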
Dec 16 13:04:44.413428 coreos-metadata[1664]: Dec 16 13:04:44.411 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:04:44.419728 coreos-metadata[1664]: Dec 16 13:04:44.417 INFO Fetch successful Dec 16 13:04:44.419728 coreos-metadata[1664]: Dec 16 13:04:44.417 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:04:44.457317 coreos-metadata[1664]: Dec 16 13:04:44.457 INFO Fetch successful Dec 16 13:04:44.457754 coreos-metadata[1664]: Dec 16 13:04:44.457 INFO Fetching http://168.63.129.16/machine/5f8d8d06-1782-4b8d-a972-080c6ac2b4da/1fef7e4a%2D788e%2D4fbb%2Dbcfe%2D70179f365bc9.%5Fci%2D4459.2.2%2Da%2Defe6a0b1f4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:04:44.459901 coreos-metadata[1664]: Dec 16 13:04:44.459 INFO Fetch successful Dec 16 13:04:44.459901 coreos-metadata[1664]: Dec 16 13:04:44.459 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:04:44.470187 coreos-metadata[1664]: Dec 16 13:04:44.468 INFO Fetch successful Dec 16 13:04:44.533088 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:04:44.537118 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:04:44.541074 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:04:44.584106 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:04:44.589063 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:04:44.596658 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:04:44.629166 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:04:44.629417 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:04:44.634978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:04:44.655550 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:04:44.659076 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:04:44.667385 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:04:44.674086 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:04:44.676239 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:04:44.685194 locksmithd[1768]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:04:44.789588 tar[1695]: linux-amd64/README.md Dec 16 13:04:44.809475 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:04:45.371889 containerd[1705]: time="2025-12-16T13:04:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:04:45.372842 containerd[1705]: time="2025-12-16T13:04:45.372804206Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:04:45.379617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
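coreos-metadata talks to both Azure control-plane endpoints seen above: the WireServer at 168.63.129.16 for the goal state, and the instance metadata service (IMDS) at 169.254.169.254 for VM details. The IMDS fetch is easy to reproduce by hand; note that IMDS rejects requests without the Metadata header:

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"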
Dec 16 13:04:45.385908 containerd[1705]: time="2025-12-16T13:04:45.385877685Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.736µs" Dec 16 13:04:45.385908 containerd[1705]: time="2025-12-16T13:04:45.385901932Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:04:45.385997 containerd[1705]: time="2025-12-16T13:04:45.385919518Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:04:45.386053 containerd[1705]: time="2025-12-16T13:04:45.386039652Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:04:45.386075 containerd[1705]: time="2025-12-16T13:04:45.386053303Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:04:45.386096 containerd[1705]: time="2025-12-16T13:04:45.386075841Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386134 containerd[1705]: time="2025-12-16T13:04:45.386121984Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386157 containerd[1705]: time="2025-12-16T13:04:45.386132692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386323 containerd[1705]: time="2025-12-16T13:04:45.386304782Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386323 containerd[1705]: time="2025-12-16T13:04:45.386318551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386368 containerd[1705]: time="2025-12-16T13:04:45.386329138Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386368 containerd[1705]: time="2025-12-16T13:04:45.386336926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386411 containerd[1705]: time="2025-12-16T13:04:45.386396547Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386724 containerd[1705]: time="2025-12-16T13:04:45.386693582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386756 containerd[1705]: time="2025-12-16T13:04:45.386744780Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:45.386776 containerd[1705]: time="2025-12-16T13:04:45.386755757Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:04:45.386795 containerd[1705]: time="2025-12-16T13:04:45.386787153Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:04:45.387186 containerd[1705]: 
time="2025-12-16T13:04:45.387168864Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:04:45.387241 containerd[1705]: time="2025-12-16T13:04:45.387229559Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:04:45.388727 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:45.402258 containerd[1705]: time="2025-12-16T13:04:45.402227118Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:04:45.402325 containerd[1705]: time="2025-12-16T13:04:45.402274674Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:04:45.402325 containerd[1705]: time="2025-12-16T13:04:45.402290892Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:04:45.402325 containerd[1705]: time="2025-12-16T13:04:45.402304470Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:04:45.402325 containerd[1705]: time="2025-12-16T13:04:45.402318303Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:04:45.402426 containerd[1705]: time="2025-12-16T13:04:45.402328995Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:04:45.402426 containerd[1705]: time="2025-12-16T13:04:45.402345157Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402358745Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402516480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402532522Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402543019Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402556489Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402669425Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402686800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402726914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402739861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402759376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: 
time="2025-12-16T13:04:45.402770489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402781981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402792308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402803936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:04:45.403282 containerd[1705]: time="2025-12-16T13:04:45.402815347Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:04:45.403576 containerd[1705]: time="2025-12-16T13:04:45.402826237Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:04:45.403576 containerd[1705]: time="2025-12-16T13:04:45.402871445Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:04:45.403576 containerd[1705]: time="2025-12-16T13:04:45.402884073Z" level=info msg="Start snapshots syncer" Dec 16 13:04:45.403576 containerd[1705]: time="2025-12-16T13:04:45.402902279Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:04:45.403656 containerd[1705]: time="2025-12-16T13:04:45.403152511Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:04:45.403656 containerd[1705]: time="2025-12-16T13:04:45.403200379Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:04:45.403656 
containerd[1705]: time="2025-12-16T13:04:45.403243553Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:04:45.403927 containerd[1705]: time="2025-12-16T13:04:45.403908157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:04:45.403978 containerd[1705]: time="2025-12-16T13:04:45.403968653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:04:45.404016 containerd[1705]: time="2025-12-16T13:04:45.404009173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:04:45.404058 containerd[1705]: time="2025-12-16T13:04:45.404049436Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:04:45.404104 containerd[1705]: time="2025-12-16T13:04:45.404095953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:04:45.404143 containerd[1705]: time="2025-12-16T13:04:45.404136030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:04:45.404181 containerd[1705]: time="2025-12-16T13:04:45.404174348Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:04:45.404232 containerd[1705]: time="2025-12-16T13:04:45.404224509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:04:45.404268 containerd[1705]: time="2025-12-16T13:04:45.404261442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:04:45.404310 containerd[1705]: time="2025-12-16T13:04:45.404302405Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:04:45.404363 containerd[1705]: time="2025-12-16T13:04:45.404355398Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:45.404438 containerd[1705]: time="2025-12-16T13:04:45.404427612Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:45.404536 containerd[1705]: time="2025-12-16T13:04:45.404524374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:45.404578 containerd[1705]: time="2025-12-16T13:04:45.404568780Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:45.404619 containerd[1705]: time="2025-12-16T13:04:45.404611708Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:04:45.404662 containerd[1705]: time="2025-12-16T13:04:45.404654027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:04:45.404707 containerd[1705]: time="2025-12-16T13:04:45.404699245Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:04:45.404748 containerd[1705]: time="2025-12-16T13:04:45.404741825Z" level=info msg="runtime interface created" Dec 16 13:04:45.404782 containerd[1705]: time="2025-12-16T13:04:45.404775396Z" level=info msg="created NRI 
interface" Dec 16 13:04:45.404817 containerd[1705]: time="2025-12-16T13:04:45.404810007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:04:45.404855 containerd[1705]: time="2025-12-16T13:04:45.404848821Z" level=info msg="Connect containerd service" Dec 16 13:04:45.404905 containerd[1705]: time="2025-12-16T13:04:45.404898943Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:04:45.405770 containerd[1705]: time="2025-12-16T13:04:45.405745270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:04:45.943766 kubelet[1819]: E1216 13:04:45.943713 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:45.946502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:45.946713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:45.948207 systemd[1]: kubelet.service: Consumed 957ms CPU time, 265.8M memory peak. Dec 16 13:04:46.014874 containerd[1705]: time="2025-12-16T13:04:46.014808655Z" level=info msg="Start subscribing containerd event" Dec 16 13:04:46.015048 containerd[1705]: time="2025-12-16T13:04:46.014997918Z" level=info msg="Start recovering state" Dec 16 13:04:46.015095 containerd[1705]: time="2025-12-16T13:04:46.015057350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:04:46.015128 containerd[1705]: time="2025-12-16T13:04:46.015097381Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:04:46.015261 containerd[1705]: time="2025-12-16T13:04:46.015249965Z" level=info msg="Start event monitor" Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015315203Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015327619Z" level=info msg="Start streaming server" Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015337463Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015345417Z" level=info msg="runtime interface starting up..." Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015351919Z" level=info msg="starting plugins..." Dec 16 13:04:46.015389 containerd[1705]: time="2025-12-16T13:04:46.015365239Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:04:46.015699 containerd[1705]: time="2025-12-16T13:04:46.015645427Z" level=info msg="containerd successfully booted in 0.644669s" Dec 16 13:04:46.015820 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:04:46.019922 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:04:46.022697 systemd[1]: Startup finished in 3.203s (kernel) + 10.487s (initrd) + 10.778s (userspace) = 24.469s. 
Dec 16 13:04:46.299111 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:04:46.300562 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:04:46.315286 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:04:46.316072 systemd-logind[1684]: New session 2 of user core. Dec 16 13:04:46.317554 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:04:46.320760 systemd-logind[1684]: New session 1 of user core. Dec 16 13:04:46.366656 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:04:46.368802 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:04:46.381915 (systemd)[1845]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:04:46.384248 systemd-logind[1684]: New session c1 of user core. Dec 16 13:04:46.597605 waagent[1798]: 2025-12-16T13:04:46.597460Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:04:46.598217 waagent[1798]: 2025-12-16T13:04:46.598176Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 13:04:46.598451 waagent[1798]: 2025-12-16T13:04:46.598426Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:04:46.599014 waagent[1798]: 2025-12-16T13:04:46.598913Z INFO Daemon Daemon Run daemon Dec 16 13:04:46.599250 waagent[1798]: 2025-12-16T13:04:46.599226Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 13:04:46.599567 waagent[1798]: 2025-12-16T13:04:46.599546Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:04:46.600043 waagent[1798]: 2025-12-16T13:04:46.600023Z INFO Daemon Daemon Activate resource disk Dec 16 13:04:46.600596 waagent[1798]: 2025-12-16T13:04:46.600574Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:04:46.602558 waagent[1798]: 2025-12-16T13:04:46.602518Z INFO Daemon Daemon Found device: None Dec 16 13:04:46.602924 waagent[1798]: 2025-12-16T13:04:46.602899Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:04:46.603373 waagent[1798]: 2025-12-16T13:04:46.603353Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:04:46.604173 waagent[1798]: 2025-12-16T13:04:46.603972Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:04:46.604913 waagent[1798]: 2025-12-16T13:04:46.604882Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:04:46.611728 waagent[1798]: 2025-12-16T13:04:46.611688Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:04:46.612565 waagent[1798]: 2025-12-16T13:04:46.612534Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:04:46.612860 waagent[1798]: 2025-12-16T13:04:46.612837Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:04:46.613112 waagent[1798]: 2025-12-16T13:04:46.613096Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:04:46.631461 systemd[1845]: Queued start job for default target default.target. Dec 16 13:04:46.638233 systemd[1845]: Created slice app.slice - User Application Slice. 
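The two near-simultaneous logins are the flatcar.autologin consoles from the kernel command line (tty1 and ttyS0); they become sessions 1 and 2, while c1 is the per-user service manager's own session. The resulting layout can be inspected with loginctl:

    loginctl list-sessions                   # sessions 1 and 2 for user core
    loginctl show-session 2 -p Type -p TTY   # per-session details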
Dec 16 13:04:46.638261 systemd[1845]: Reached target paths.target - Paths. Dec 16 13:04:46.638292 systemd[1845]: Reached target timers.target - Timers. Dec 16 13:04:46.640336 systemd[1845]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:04:46.647964 systemd[1845]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:04:46.648205 systemd[1845]: Reached target sockets.target - Sockets. Dec 16 13:04:46.648288 systemd[1845]: Reached target basic.target - Basic System. Dec 16 13:04:46.648386 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:04:46.648584 systemd[1845]: Reached target default.target - Main User Target. Dec 16 13:04:46.648614 systemd[1845]: Startup finished in 258ms. Dec 16 13:04:46.649567 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:04:46.650205 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:04:46.669954 waagent[1798]: 2025-12-16T13:04:46.669894Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:04:46.695364 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:04:46.697286 waagent[1798]: 2025-12-16T13:04:46.697241Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:04:46.697613 waagent[1798]: 2025-12-16T13:04:46.697404Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:04:46.697697 waagent[1798]: 2025-12-16T13:04:46.697671Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 13:04:46.697761 waagent[1798]: 2025-12-16T13:04:46.697743Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:04:46.697905 waagent[1798]: 2025-12-16T13:04:46.697885Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:04:46.697969 waagent[1798]: 2025-12-16T13:04:46.697951Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:04:46.713843 waagent[1798]: 2025-12-16T13:04:46.713817Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:04:46.714283 waagent[1798]: 2025-12-16T13:04:46.714093Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:04:46.714283 waagent[1798]: 2025-12-16T13:04:46.714286Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:04:46.783512 waagent[1798]: 2025-12-16T13:04:46.783429Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:04:46.783806 waagent[1798]: 2025-12-16T13:04:46.783772Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:04:46.788817 waagent[1798]: 2025-12-16T13:04:46.788773Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:04:46.808090 waagent[1798]: 2025-12-16T13:04:46.808059Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 13:04:46.812073 waagent[1798]: 2025-12-16T13:04:46.808620Z INFO Daemon Dec 16 13:04:46.812073 waagent[1798]: 2025-12-16T13:04:46.808901Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: df3fdb0b-465a-499f-8ebe-3cd5a72fa872 eTag: 5194983581265114419 source: Fabric] Dec 16 13:04:46.812073 waagent[1798]: 2025-12-16T13:04:46.809500Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
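"Test for route to 168.63.129.16" is waagent verifying that the WireServer is reachable through the primary interface before it speaks the wire protocol. The same check by hand:

    ip route get 168.63.129.16   # should resolve via eth0 on this host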
Dec 16 13:04:46.812073 waagent[1798]: 2025-12-16T13:04:46.809833Z INFO Daemon Dec 16 13:04:46.812073 waagent[1798]: 2025-12-16T13:04:46.810271Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:04:46.817961 waagent[1798]: 2025-12-16T13:04:46.816639Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:04:46.897318 waagent[1798]: 2025-12-16T13:04:46.897268Z INFO Daemon Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:04:46.900034 waagent[1798]: 2025-12-16T13:04:46.899999Z INFO Daemon Fetch goal state completed Dec 16 13:04:46.906702 waagent[1798]: 2025-12-16T13:04:46.906646Z INFO Daemon Daemon Starting provisioning Dec 16 13:04:46.906966 waagent[1798]: 2025-12-16T13:04:46.906845Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 13:04:46.907009 waagent[1798]: 2025-12-16T13:04:46.906969Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-efe6a0b1f4] Dec 16 13:04:46.909646 waagent[1798]: 2025-12-16T13:04:46.909607Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-efe6a0b1f4] Dec 16 13:04:46.909856 waagent[1798]: 2025-12-16T13:04:46.909829Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:04:46.910026 waagent[1798]: 2025-12-16T13:04:46.910004Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:04:46.918735 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:46.918743 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:46.918767 systemd-networkd[1337]: eth0: DHCP lease lost Dec 16 13:04:46.920076 waagent[1798]: 2025-12-16T13:04:46.919859Z INFO Daemon Daemon Create user account if not exists Dec 16 13:04:46.920673 waagent[1798]: 2025-12-16T13:04:46.920635Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:04:46.920797 waagent[1798]: 2025-12-16T13:04:46.920773Z INFO Daemon Daemon Configure sudoer Dec 16 13:04:46.927310 waagent[1798]: 2025-12-16T13:04:46.927262Z INFO Daemon Daemon Configure sshd Dec 16 13:04:46.930845 waagent[1798]: 2025-12-16T13:04:46.930805Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:04:46.931079 waagent[1798]: 2025-12-16T13:04:46.931054Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:04:46.940533 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.0.43/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:04:48.006386 waagent[1798]: 2025-12-16T13:04:48.006345Z INFO Daemon Daemon Provisioning complete Dec 16 13:04:48.020127 waagent[1798]: 2025-12-16T13:04:48.020095Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:04:48.022475 waagent[1798]: 2025-12-16T13:04:48.020301Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
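The "DHCP lease lost" right after "Publish hostname" is deliberate: waagent bounces the lease so the platform DHCP server picks up the new hostname, and systemd-networkd immediately reacquires 10.200.0.43 from 168.63.129.16. Link and lease state can be checked with:

    networkctl status eth0   # address, gateway, DNS, and current DHCP lease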
Dec 16 13:04:48.022475 waagent[1798]: 2025-12-16T13:04:48.020570Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:04:48.124313 waagent[1896]: 2025-12-16T13:04:48.124226Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:04:48.124633 waagent[1896]: 2025-12-16T13:04:48.124343Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 13:04:48.124633 waagent[1896]: 2025-12-16T13:04:48.124382Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:04:48.124633 waagent[1896]: 2025-12-16T13:04:48.124419Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:04:48.161576 waagent[1896]: 2025-12-16T13:04:48.161503Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:04:48.161723 waagent[1896]: 2025-12-16T13:04:48.161693Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:48.161786 waagent[1896]: 2025-12-16T13:04:48.161754Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:48.167063 waagent[1896]: 2025-12-16T13:04:48.167000Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:04:48.173221 waagent[1896]: 2025-12-16T13:04:48.173184Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 13:04:48.173613 waagent[1896]: 2025-12-16T13:04:48.173581Z INFO ExtHandler Dec 16 13:04:48.173657 waagent[1896]: 2025-12-16T13:04:48.173639Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 36234f67-9bcb-40eb-84dc-97fdcbca8733 eTag: 5194983581265114419 source: Fabric] Dec 16 13:04:48.173878 waagent[1896]: 2025-12-16T13:04:48.173850Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:04:48.174227 waagent[1896]: 2025-12-16T13:04:48.174194Z INFO ExtHandler Dec 16 13:04:48.174265 waagent[1896]: 2025-12-16T13:04:48.174238Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:04:48.177669 waagent[1896]: 2025-12-16T13:04:48.177638Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:04:48.252996 waagent[1896]: 2025-12-16T13:04:48.252942Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:04:48.253340 waagent[1896]: 2025-12-16T13:04:48.253311Z INFO ExtHandler Fetch goal state completed Dec 16 13:04:48.270948 waagent[1896]: 2025-12-16T13:04:48.270861Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 13:04:48.275150 waagent[1896]: 2025-12-16T13:04:48.275099Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1896 Dec 16 13:04:48.275259 waagent[1896]: 2025-12-16T13:04:48.275233Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:04:48.275514 waagent[1896]: 2025-12-16T13:04:48.275470Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:04:48.276589 waagent[1896]: 2025-12-16T13:04:48.276559Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:04:48.276877 waagent[1896]: 2025-12-16T13:04:48.276852Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:04:48.276993 waagent[1896]: 2025-12-16T13:04:48.276973Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:04:48.277400 waagent[1896]: 2025-12-16T13:04:48.277376Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:04:48.309593 waagent[1896]: 2025-12-16T13:04:48.309560Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:04:48.309739 waagent[1896]: 2025-12-16T13:04:48.309714Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:04:48.315445 waagent[1896]: 2025-12-16T13:04:48.315088Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:04:48.320132 systemd[1]: Reload requested from client PID 1911 ('systemctl') (unit waagent.service)... Dec 16 13:04:48.320167 systemd[1]: Reloading... Dec 16 13:04:48.394524 zram_generator::config[1950]: No configuration found. Dec 16 13:04:48.574224 systemd[1]: Reloading finished in 253 ms. Dec 16 13:04:48.596735 waagent[1896]: 2025-12-16T13:04:48.595385Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:04:48.596735 waagent[1896]: 2025-12-16T13:04:48.595556Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:04:48.727939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:04:49.269373 waagent[1896]: 2025-12-16T13:04:49.269298Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
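The two "AutoUpdate ... not processing the operation" lines reflect agent configuration, not a fault: this image pins the agent version. The corresponding keys live in waagent.conf (its location varies by image layout; both key names are taken from the log):

    AutoUpdate.Enabled=n
    AutoUpdate.UpdateToLatestVersion=n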
Dec 16 13:04:49.269712 waagent[1896]: 2025-12-16T13:04:49.269679Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:04:49.270430 waagent[1896]: 2025-12-16T13:04:49.270393Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:04:49.270620 waagent[1896]: 2025-12-16T13:04:49.270591Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:49.270689 waagent[1896]: 2025-12-16T13:04:49.270663Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:49.270887 waagent[1896]: 2025-12-16T13:04:49.270866Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:04:49.271174 waagent[1896]: 2025-12-16T13:04:49.271123Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:04:49.271241 waagent[1896]: 2025-12-16T13:04:49.271180Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:04:49.271345 waagent[1896]: 2025-12-16T13:04:49.271323Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:04:49.271345 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:04:49.271345 waagent[1896]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:04:49.271345 waagent[1896]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:04:49.271345 waagent[1896]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:49.271345 waagent[1896]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:49.271345 waagent[1896]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:04:49.271826 waagent[1896]: 2025-12-16T13:04:49.271651Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:04:49.271826 waagent[1896]: 2025-12-16T13:04:49.271790Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:04:49.272057 waagent[1896]: 2025-12-16T13:04:49.272013Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:04:49.272101 waagent[1896]: 2025-12-16T13:04:49.272075Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:04:49.272341 waagent[1896]: 2025-12-16T13:04:49.272306Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:04:49.272377 waagent[1896]: 2025-12-16T13:04:49.272356Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:04:49.272432 waagent[1896]: 2025-12-16T13:04:49.272412Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:04:49.272787 waagent[1896]: 2025-12-16T13:04:49.272763Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:04:49.273259 waagent[1896]: 2025-12-16T13:04:49.273178Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
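The routing table above is raw /proc/net/route, so the Destination/Gateway/Mask columns are little-endian hex. Decoding the default gateway 0100C80A by reversing its bytes:

    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x00 0x01   # -> 10.200.0.1, matching the DHCP gateway seen earlier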
Dec 16 13:04:49.290748 waagent[1896]: 2025-12-16T13:04:49.290708Z INFO ExtHandler ExtHandler Dec 16 13:04:49.290825 waagent[1896]: 2025-12-16T13:04:49.290768Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a265c00e-2205-4d18-9321-eeb70515a12f correlation 58948a11-560f-4e43-ab6c-7b3c6be378fe created: 2025-12-16T13:03:51.796814Z] Dec 16 13:04:49.291063 waagent[1896]: 2025-12-16T13:04:49.291033Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:04:49.291445 waagent[1896]: 2025-12-16T13:04:49.291419Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:04:49.328194 waagent[1896]: 2025-12-16T13:04:49.328146Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:04:49.328194 waagent[1896]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 13:04:49.328509 waagent[1896]: 2025-12-16T13:04:49.328465Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 22C42939-5F86-4F20-8174-E408CBA55FA1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:04:49.512371 waagent[1896]: 2025-12-16T13:04:49.512293Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:04:49.512371 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:04:49.512371 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:04:49.512371 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:83:cf:90 brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 16 13:04:49.512371 waagent[1896]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:83:cf:90 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:04:49.512371 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:04:49.512371 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:04:49.512371 waagent[1896]: 2: eth0 inet 10.200.0.43/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:04:49.512371 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:04:49.512371 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:04:49.512371 waagent[1896]: 2: eth0 inet6 fe80::222:48ff:fe83:cf90/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:04:49.919029 waagent[1896]: 2025-12-16T13:04:49.918970Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:04:49.919029 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:49.919029 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 13:04:49.919029 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:49.919029 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 13:04:49.919029 waagent[1896]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Dec 16 13:04:49.919029 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 
13:04:49.919029 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:04:49.919029 waagent[1896]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:04:49.919029 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:04:49.921809 waagent[1896]: 2025-12-16T13:04:49.921760Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:04:49.921809 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:49.921809 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 13:04:49.921809 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:04:49.921809 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 13:04:49.921809 waagent[1896]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Dec 16 13:04:49.921809 waagent[1896]: pkts bytes target prot opt in out source destination Dec 16 13:04:49.921809 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:04:49.921809 waagent[1896]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:04:49.921809 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:04:56.108201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:04:56.109684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:00.983551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:00.989691 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:01.028359 kubelet[2048]: E1216 13:05:01.028318 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:01.031798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:01.031936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:01.032250 systemd[1]: kubelet.service: Consumed 136ms CPU time, 108.5M memory peak. Dec 16 13:05:07.866238 chronyd[1662]: Selected source PHC0 Dec 16 13:05:08.538603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:05:08.539602 systemd[1]: Started sshd@0-10.200.0.43:22-10.200.16.10:50874.service - OpenSSH per-connection server daemon (10.200.16.10:50874). Dec 16 13:05:09.165808 sshd[2056]: Accepted publickey for core from 10.200.16.10 port 50874 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:09.166883 sshd-session[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:09.171077 systemd-logind[1684]: New session 3 of user core. Dec 16 13:05:09.176620 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:05:09.651834 systemd[1]: Started sshd@1-10.200.0.43:22-10.200.16.10:50880.service - OpenSSH per-connection server daemon (10.200.16.10:50880). 
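The earlier "Illegal option `--numeric'" warning comes from combining a list and a zero operation in one iptables invocation, which the nf_tables backend rejects; listing and zeroing in separate calls works. The security-table rules shown above also correspond to three straightforward commands (a sketch of equivalents, not the agent's own code):

    # list with numeric counters, then zero them in a second call
    iptables -w -t security -L OUTPUT -nxv
    iptables -w -t security -Z OUTPUT
    # equivalents of the three rules protecting the WireServer endpoint
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP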
Dec 16 13:05:10.217285 sshd[2062]: Accepted publickey for core from 10.200.16.10 port 50880 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:10.218389 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:10.221996 systemd-logind[1684]: New session 4 of user core. Dec 16 13:05:10.228613 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:05:10.606388 sshd[2065]: Connection closed by 10.200.16.10 port 50880 Dec 16 13:05:10.608565 sshd-session[2062]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:10.611632 systemd[1]: sshd@1-10.200.0.43:22-10.200.16.10:50880.service: Deactivated successfully. Dec 16 13:05:10.613329 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:05:10.614535 systemd-logind[1684]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:05:10.615248 systemd-logind[1684]: Removed session 4. Dec 16 13:05:10.704113 systemd[1]: Started sshd@2-10.200.0.43:22-10.200.16.10:60286.service - OpenSSH per-connection server daemon (10.200.16.10:60286). Dec 16 13:05:11.108125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:05:11.109603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:11.276434 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 60286 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:11.277563 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:11.282076 systemd-logind[1684]: New session 5 of user core. Dec 16 13:05:11.287627 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:05:11.666266 sshd[2077]: Connection closed by 10.200.16.10 port 60286 Dec 16 13:05:11.666831 sshd-session[2071]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:11.669988 systemd[1]: sshd@2-10.200.0.43:22-10.200.16.10:60286.service: Deactivated successfully. Dec 16 13:05:11.671446 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:05:11.672167 systemd-logind[1684]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:05:11.673272 systemd-logind[1684]: Removed session 5. Dec 16 13:05:11.767995 systemd[1]: Started sshd@3-10.200.0.43:22-10.200.16.10:60300.service - OpenSSH per-connection server daemon (10.200.16.10:60300). Dec 16 13:05:12.320676 sshd[2083]: Accepted publickey for core from 10.200.16.10 port 60300 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:12.321793 sshd-session[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:12.326070 systemd-logind[1684]: New session 6 of user core. Dec 16 13:05:12.328631 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:05:12.711163 sshd[2086]: Connection closed by 10.200.16.10 port 60300 Dec 16 13:05:12.711760 sshd-session[2083]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:12.714606 systemd[1]: sshd@3-10.200.0.43:22-10.200.16.10:60300.service: Deactivated successfully. Dec 16 13:05:12.716158 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:05:12.717977 systemd-logind[1684]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:05:12.718783 systemd-logind[1684]: Removed session 6. Dec 16 13:05:12.821169 systemd[1]: Started sshd@4-10.200.0.43:22-10.200.16.10:60310.service - OpenSSH per-connection server daemon (10.200.16.10:60310). 
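The sshd@N-<ip>:22-<peer>:<port>.service names above show per-connection socket activation: sshd.socket accepts each TCP connection and spawns a templated sshd instance for it. The socket definition can be read with (assuming the usual accepting-socket setup that this naming implies):

    systemctl cat sshd.socket   # expect Accept=yes, which yields one sshd@... unit per connection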
Dec 16 13:05:13.375932 sshd[2092]: Accepted publickey for core from 10.200.16.10 port 60310 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:13.377031 sshd-session[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:13.381262 systemd-logind[1684]: New session 7 of user core. Dec 16 13:05:13.387633 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:05:14.315091 sudo[2096]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:05:14.315315 sudo[2096]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:14.368322 sudo[2096]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:14.454724 sshd[2095]: Connection closed by 10.200.16.10 port 60310 Dec 16 13:05:14.455373 sshd-session[2092]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:14.459002 systemd[1]: sshd@4-10.200.0.43:22-10.200.16.10:60310.service: Deactivated successfully. Dec 16 13:05:14.460451 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:05:14.461185 systemd-logind[1684]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:05:14.462528 systemd-logind[1684]: Removed session 7. Dec 16 13:05:14.552318 systemd[1]: Started sshd@5-10.200.0.43:22-10.200.16.10:60320.service - OpenSSH per-connection server daemon (10.200.16.10:60320). Dec 16 13:05:15.044605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:15.061709 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:15.097171 kubelet[2110]: E1216 13:05:15.097134 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:15.099018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:15.099156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:15.099459 systemd[1]: kubelet.service: Consumed 129ms CPU time, 111.1M memory peak. Dec 16 13:05:15.107513 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 60320 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:15.108594 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:15.112760 systemd-logind[1684]: New session 8 of user core. Dec 16 13:05:15.118627 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:05:15.412181 sudo[2119]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:05:15.412399 sudo[2119]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:15.686550 sudo[2119]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:15.691105 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:05:15.691332 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:15.699208 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
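The sudo records show the provisioning script switching SELinux to enforcing (setenforce 1) and then clearing the stock audit rule fragments. The resulting SELinux state is verifiable with:

    getenforce   # Enforcing, after the setenforce 1 above
    sestatus     # mode, loaded policy, and MLS status in one report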
Dec 16 13:05:15.731009 augenrules[2141]: No rules Dec 16 13:05:15.732030 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:05:15.732228 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:05:15.733113 sudo[2118]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:15.820505 sshd[2117]: Connection closed by 10.200.16.10 port 60320 Dec 16 13:05:15.820975 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:15.823716 systemd[1]: sshd@5-10.200.0.43:22-10.200.16.10:60320.service: Deactivated successfully. Dec 16 13:05:15.825192 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:05:15.826397 systemd-logind[1684]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:05:15.827677 systemd-logind[1684]: Removed session 8. Dec 16 13:05:15.928969 systemd[1]: Started sshd@6-10.200.0.43:22-10.200.16.10:60322.service - OpenSSH per-connection server daemon (10.200.16.10:60322). Dec 16 13:05:16.478739 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 60322 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:16.479806 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:16.484013 systemd-logind[1684]: New session 9 of user core. Dec 16 13:05:16.498612 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:05:16.783158 sudo[2154]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:05:16.783376 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:20.565101 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:05:20.581825 (dockerd)[2171]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:05:22.184095 dockerd[2171]: time="2025-12-16T13:05:22.184042495Z" level=info msg="Starting up" Dec 16 13:05:22.185872 dockerd[2171]: time="2025-12-16T13:05:22.185838686Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:05:22.195415 dockerd[2171]: time="2025-12-16T13:05:22.195381822Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:05:22.286003 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1821796783-merged.mount: Deactivated successfully. Dec 16 13:05:23.041594 systemd[1]: var-lib-docker-metacopy\x2dcheck3108693703-merged.mount: Deactivated successfully. Dec 16 13:05:23.750666 dockerd[2171]: time="2025-12-16T13:05:23.750619128Z" level=info msg="Loading containers: start." Dec 16 13:05:23.815564 kernel: Initializing XFRM netlink socket Dec 16 13:05:24.162795 systemd-networkd[1337]: docker0: Link UP Dec 16 13:05:24.176329 dockerd[2171]: time="2025-12-16T13:05:24.176283787Z" level=info msg="Loading containers: done." Dec 16 13:05:24.189666 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3330273774-merged.mount: Deactivated successfully. 
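augenrules reported "No rules" because the fragments under /etc/audit/rules.d were removed by the earlier sudo rm. New rules go back in the same directory; a minimal illustrative fragment:

    cat <<'EOF' >/etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k identity   # audit writes and attribute changes to /etc/passwd
    EOF
    augenrules --load   # merge the rules.d fragments and load the result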
Dec 16 13:05:24.199344 dockerd[2171]: time="2025-12-16T13:05:24.199305752Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:05:24.199431 dockerd[2171]: time="2025-12-16T13:05:24.199387582Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:05:24.199474 dockerd[2171]: time="2025-12-16T13:05:24.199463027Z" level=info msg="Initializing buildkit" Dec 16 13:05:24.249964 dockerd[2171]: time="2025-12-16T13:05:24.249926627Z" level=info msg="Completed buildkit initialization" Dec 16 13:05:24.255884 dockerd[2171]: time="2025-12-16T13:05:24.255855012Z" level=info msg="Daemon has completed initialization" Dec 16 13:05:24.256442 dockerd[2171]: time="2025-12-16T13:05:24.255900097Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:05:24.256134 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:05:25.108265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:05:25.109884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:25.303331 containerd[1705]: time="2025-12-16T13:05:25.303291287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:05:25.618597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:25.624694 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:25.660297 kubelet[2385]: E1216 13:05:25.660260 2385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:25.662100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:25.662244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:25.662623 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak. Dec 16 13:05:26.231547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158513010.mount: Deactivated successfully. 
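The recurring kubelet crash loop (this is restart 3; restart 4 follows below) is the expected pre-bootstrap state: kubelet.service is enabled before /var/lib/kubelet/config.yaml exists, and that file is only written during cluster initialization. A minimal sketch for confirming the loop and the missing file (unit name and path from the log; assumes kubeadm is installed, which the install.sh run above appears to handle):

    # Unit state and how many times systemd has restarted it
    systemctl status kubelet --no-pager
    systemctl show kubelet -p NRestarts
    # The config file the kubelet fails to open
    ls -l /var/lib/kubelet/config.yaml
    # kubeadm writes config.yaml (plus the kubelet flags env file) in this init phase
    kubeadm init phase kubelet-start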
Dec 16 13:05:27.522050 containerd[1705]: time="2025-12-16T13:05:27.522001104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.527549 containerd[1705]: time="2025-12-16T13:05:27.527514435Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114150" Dec 16 13:05:27.530655 containerd[1705]: time="2025-12-16T13:05:27.530622666Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.535351 containerd[1705]: time="2025-12-16T13:05:27.535111259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.535807 containerd[1705]: time="2025-12-16T13:05:27.535783614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.232451335s" Dec 16 13:05:27.535850 containerd[1705]: time="2025-12-16T13:05:27.535820654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 13:05:27.536385 containerd[1705]: time="2025-12-16T13:05:27.536363242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 13:05:27.543501 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 16 13:05:28.739490 containerd[1705]: time="2025-12-16T13:05:28.739426996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.742642 containerd[1705]: time="2025-12-16T13:05:28.742601599Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016713" Dec 16 13:05:28.745366 containerd[1705]: time="2025-12-16T13:05:28.745325571Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.749053 containerd[1705]: time="2025-12-16T13:05:28.749007043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.750075 containerd[1705]: time="2025-12-16T13:05:28.749649788Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.213255747s" Dec 16 13:05:28.750075 containerd[1705]: time="2025-12-16T13:05:28.749682602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 13:05:28.750301 containerd[1705]: time="2025-12-16T13:05:28.750274173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 13:05:29.558661 update_engine[1685]: I20251216 13:05:29.558588 1685 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:05:29.838816 containerd[1705]: time="2025-12-16T13:05:29.838726142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.841970 containerd[1705]: time="2025-12-16T13:05:29.841939227Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158034" Dec 16 13:05:29.844566 containerd[1705]: time="2025-12-16T13:05:29.844523165Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.848200 containerd[1705]: time="2025-12-16T13:05:29.847945176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.848666 containerd[1705]: time="2025-12-16T13:05:29.848642143Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.0983384s" Dec 16 13:05:29.848707 containerd[1705]: time="2025-12-16T13:05:29.848675529Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 13:05:29.849296 containerd[1705]: time="2025-12-16T13:05:29.849266858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 13:05:30.721095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135429927.mount: Deactivated successfully. 
Dec 16 13:05:31.104603 containerd[1705]: time="2025-12-16T13:05:31.104470646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:31.106809 containerd[1705]: time="2025-12-16T13:05:31.106776287Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31929990" Dec 16 13:05:31.109540 containerd[1705]: time="2025-12-16T13:05:31.109466165Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:31.112707 containerd[1705]: time="2025-12-16T13:05:31.112675342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:31.113277 containerd[1705]: time="2025-12-16T13:05:31.112961105Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.263667169s" Dec 16 13:05:31.113277 containerd[1705]: time="2025-12-16T13:05:31.112991039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 13:05:31.113516 containerd[1705]: time="2025-12-16T13:05:31.113500667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 13:05:31.599245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574235299.mount: Deactivated successfully. 
Dec 16 13:05:32.617655 containerd[1705]: time="2025-12-16T13:05:32.617606824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:32.621869 containerd[1705]: time="2025-12-16T13:05:32.621619784Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Dec 16 13:05:32.624421 containerd[1705]: time="2025-12-16T13:05:32.624394551Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:32.629320 containerd[1705]: time="2025-12-16T13:05:32.629287364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:32.630013 containerd[1705]: time="2025-12-16T13:05:32.629988665Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.516411417s" Dec 16 13:05:32.630090 containerd[1705]: time="2025-12-16T13:05:32.630078759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:05:32.630739 containerd[1705]: time="2025-12-16T13:05:32.630706041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:05:33.058426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172103099.mount: Deactivated successfully. 
Dec 16 13:05:33.107629 containerd[1705]: time="2025-12-16T13:05:33.107580191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:33.109933 containerd[1705]: time="2025-12-16T13:05:33.109901999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Dec 16 13:05:33.112653 containerd[1705]: time="2025-12-16T13:05:33.112616128Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:33.116398 containerd[1705]: time="2025-12-16T13:05:33.116355932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:33.117177 containerd[1705]: time="2025-12-16T13:05:33.116798601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.067039ms" Dec 16 13:05:33.117177 containerd[1705]: time="2025-12-16T13:05:33.116826150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:05:33.117271 containerd[1705]: time="2025-12-16T13:05:33.117252803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:05:33.568550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963513618.mount: Deactivated successfully. 
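By this point containerd has pulled kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns and pause, and the etcd pull is just starting. A minimal sketch for listing the same images from the host (the containerd socket path is the stock default and is not shown in this log; ctr/crictl must be installed):

    # Images in containerd's k8s.io namespace
    ctr --address /run/containerd/containerd.sock -n k8s.io images ls
    # The same view through the CRI, which is what the kubelet uses
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images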
Dec 16 13:05:35.316371 containerd[1705]: time="2025-12-16T13:05:35.316316447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.319148 containerd[1705]: time="2025-12-16T13:05:35.319118091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58925893" Dec 16 13:05:35.324366 containerd[1705]: time="2025-12-16T13:05:35.324316169Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.331574 containerd[1705]: time="2025-12-16T13:05:35.331539219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.332667 containerd[1705]: time="2025-12-16T13:05:35.332350420Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.215067807s" Dec 16 13:05:35.332667 containerd[1705]: time="2025-12-16T13:05:35.332379208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:05:35.858165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:05:35.859760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:36.336617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:36.342772 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:36.384312 kubelet[2642]: E1216 13:05:36.384277 2642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:36.388420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:36.388688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:36.389222 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.4M memory peak. Dec 16 13:05:38.853101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:38.853246 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.4M memory peak. Dec 16 13:05:38.855225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:38.880609 systemd[1]: Reload requested from client PID 2656 ('systemctl') (unit session-9.scope)... Dec 16 13:05:38.880625 systemd[1]: Reloading... Dec 16 13:05:38.971515 zram_generator::config[2699]: No configuration found. Dec 16 13:05:39.159850 systemd[1]: Reloading finished in 278 ms. Dec 16 13:05:39.301462 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:05:39.301567 systemd[1]: kubelet.service: Failed with result 'signal'. 
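The "Reload requested from client PID 2656 ('systemctl') (unit session-9.scope)" entry is a daemon-reload issued from the same SSH session running install.sh, and the kubelet start that follows references only KUBELET_EXTRA_ARGS and carries real (if deprecated) flags, which suggests a unit drop-in landed between the failing runs and this one. A speculative way to inspect such a change (the drop-in directory is the usual convention, not shown in this log):

    # Render kubelet.service together with any drop-ins that override it
    systemctl cat kubelet --no-pager
    # Drop-ins conventionally live here (e.g. a kubeadm-managed 10-kubeadm.conf)
    ls /etc/systemd/system/kubelet.service.d/ 2>/dev/null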
Dec 16 13:05:39.301816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:39.301883 systemd[1]: kubelet.service: Consumed 75ms CPU time, 78M memory peak. Dec 16 13:05:39.304028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:39.845818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:39.850425 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:39.889500 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:39.889500 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:39.889500 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:39.889500 kubelet[2770]: I1216 13:05:39.888842 2770 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:40.469278 kubelet[2770]: I1216 13:05:40.468982 2770 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:05:40.469278 kubelet[2770]: I1216 13:05:40.469014 2770 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:40.469451 kubelet[2770]: I1216 13:05:40.469403 2770 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:40.498214 kubelet[2770]: I1216 13:05:40.498181 2770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:40.498607 kubelet[2770]: E1216 13:05:40.498585 2770 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:05:40.504157 kubelet[2770]: I1216 13:05:40.504121 2770 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:40.506682 kubelet[2770]: I1216 13:05:40.506652 2770 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:05:40.506927 kubelet[2770]: I1216 13:05:40.506892 2770 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:40.507088 kubelet[2770]: I1216 13:05:40.506924 2770 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-efe6a0b1f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:40.507202 kubelet[2770]: I1216 13:05:40.507092 2770 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:40.507202 kubelet[2770]: I1216 13:05:40.507101 2770 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:05:40.507202 kubelet[2770]: I1216 13:05:40.507200 2770 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:40.509855 kubelet[2770]: I1216 13:05:40.509476 2770 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:05:40.509855 kubelet[2770]: I1216 13:05:40.509507 2770 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:40.509855 kubelet[2770]: I1216 13:05:40.509535 2770 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:05:40.511001 kubelet[2770]: I1216 13:05:40.510987 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:40.521415 kubelet[2770]: E1216 13:05:40.521352 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-efe6a0b1f4&limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:40.521587 kubelet[2770]: I1216 13:05:40.521577 2770 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:05:40.522063 kubelet[2770]: I1216 13:05:40.522054 2770 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Dec 16 13:05:40.523816 kubelet[2770]: W1216 13:05:40.523803 2770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:05:40.527254 kubelet[2770]: E1216 13:05:40.527229 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:05:40.527529 kubelet[2770]: I1216 13:05:40.527513 2770 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:05:40.527573 kubelet[2770]: I1216 13:05:40.527556 2770 server.go:1289] "Started kubelet" Dec 16 13:05:40.534508 kubelet[2770]: I1216 13:05:40.533170 2770 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:40.534764 kubelet[2770]: I1216 13:05:40.534751 2770 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:05:40.535791 kubelet[2770]: I1216 13:05:40.535769 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:40.538123 kubelet[2770]: I1216 13:05:40.538060 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:40.538399 kubelet[2770]: I1216 13:05:40.538387 2770 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:40.540063 kubelet[2770]: E1216 13:05:40.538804 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-efe6a0b1f4.1881b3e89b6fd91d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-efe6a0b1f4,UID:ci-4459.2.2-a-efe6a0b1f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-efe6a0b1f4,},FirstTimestamp:2025-12-16 13:05:40.527528221 +0000 UTC m=+0.672243785,LastTimestamp:2025-12-16 13:05:40.527528221 +0000 UTC m=+0.672243785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-efe6a0b1f4,}" Dec 16 13:05:40.540910 kubelet[2770]: I1216 13:05:40.540891 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:40.542093 kubelet[2770]: I1216 13:05:40.542073 2770 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:05:40.542286 kubelet[2770]: E1216 13:05:40.542269 2770 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:40.542782 kubelet[2770]: I1216 13:05:40.542766 2770 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:05:40.542836 kubelet[2770]: I1216 13:05:40.542821 2770 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:05:40.543137 kubelet[2770]: E1216 13:05:40.543120 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:05:40.543199 kubelet[2770]: E1216 13:05:40.543179 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-efe6a0b1f4?timeout=10s\": dial tcp 10.200.0.43:6443: connect: connection refused" interval="200ms" Dec 16 13:05:40.544816 kubelet[2770]: I1216 13:05:40.544792 2770 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:40.547306 kubelet[2770]: I1216 13:05:40.547016 2770 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:40.547306 kubelet[2770]: I1216 13:05:40.547029 2770 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:40.562421 kubelet[2770]: E1216 13:05:40.562402 2770 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:40.566767 kubelet[2770]: I1216 13:05:40.566741 2770 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:40.566767 kubelet[2770]: I1216 13:05:40.566766 2770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:40.566851 kubelet[2770]: I1216 13:05:40.566778 2770 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:40.572417 kubelet[2770]: I1216 13:05:40.572366 2770 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:40.574682 kubelet[2770]: I1216 13:05:40.573809 2770 policy_none.go:49] "None policy: Start" Dec 16 13:05:40.574682 kubelet[2770]: I1216 13:05:40.573829 2770 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:05:40.574682 kubelet[2770]: I1216 13:05:40.573839 2770 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:05:40.575495 kubelet[2770]: I1216 13:05:40.575460 2770 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:05:40.575547 kubelet[2770]: I1216 13:05:40.575524 2770 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:05:40.575547 kubelet[2770]: I1216 13:05:40.575542 2770 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:05:40.577086 kubelet[2770]: I1216 13:05:40.575549 2770 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:05:40.577086 kubelet[2770]: E1216 13:05:40.575580 2770 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:40.577929 kubelet[2770]: E1216 13:05:40.577908 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:05:40.582837 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:05:40.595180 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 16 13:05:40.597694 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:05:40.609008 kubelet[2770]: E1216 13:05:40.608988 2770 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:40.609008 kubelet[2770]: I1216 13:05:40.609135 2770 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:40.609008 kubelet[2770]: I1216 13:05:40.609144 2770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:40.609685 kubelet[2770]: I1216 13:05:40.609542 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:40.610804 kubelet[2770]: E1216 13:05:40.610786 2770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:05:40.610868 kubelet[2770]: E1216 13:05:40.610829 2770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:40.688910 systemd[1]: Created slice kubepods-burstable-pod19a23d0c85ed28c26c267aeb81ffe648.slice - libcontainer container kubepods-burstable-pod19a23d0c85ed28c26c267aeb81ffe648.slice. Dec 16 13:05:40.695061 kubelet[2770]: E1216 13:05:40.695025 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.699448 systemd[1]: Created slice kubepods-burstable-pod45e0b5c0a2b8fe9080769bcf396b886f.slice - libcontainer container kubepods-burstable-pod45e0b5c0a2b8fe9080769bcf396b886f.slice. Dec 16 13:05:40.701603 kubelet[2770]: E1216 13:05:40.701582 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.703558 systemd[1]: Created slice kubepods-burstable-pod24677b3bd09f7a0653f64772af8018cd.slice - libcontainer container kubepods-burstable-pod24677b3bd09f7a0653f64772af8018cd.slice. 
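The three kubepods-burstable-pod<UID>.slice units match the static control-plane pods the kubelet admitted from its static pod path (/etc/kubernetes/manifests, logged above); the "No need to create a mirror pod" errors are normal while the API server is still unreachable. A minimal sketch checking both sides (paths from the log; cgroup v2 layout, per the logged CgroupVersion:2):

    # Static pod manifests the kubelet is admitting
    ls -l /etc/kubernetes/manifests/
    # The per-pod slices systemd created for them
    ls -d /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod*.slice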
Dec 16 13:05:40.705185 kubelet[2770]: E1216 13:05:40.705163 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.710342 kubelet[2770]: I1216 13:05:40.710325 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.710652 kubelet[2770]: E1216 13:05:40.710633 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.43:6443/api/v1/nodes\": dial tcp 10.200.0.43:6443: connect: connection refused" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744027 kubelet[2770]: I1216 13:05:40.743224 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744027 kubelet[2770]: I1216 13:05:40.743255 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744027 kubelet[2770]: I1216 13:05:40.743276 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45e0b5c0a2b8fe9080769bcf396b886f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"45e0b5c0a2b8fe9080769bcf396b886f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744027 kubelet[2770]: I1216 13:05:40.743292 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744027 kubelet[2770]: I1216 13:05:40.743674 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744320 kubelet[2770]: I1216 13:05:40.743699 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744320 kubelet[2770]: I1216 13:05:40.743718 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-k8s-certs\") pod 
\"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744320 kubelet[2770]: I1216 13:05:40.743755 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744320 kubelet[2770]: I1216 13:05:40.743774 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.744320 kubelet[2770]: E1216 13:05:40.744172 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-efe6a0b1f4?timeout=10s\": dial tcp 10.200.0.43:6443: connect: connection refused" interval="400ms" Dec 16 13:05:40.912220 kubelet[2770]: I1216 13:05:40.912184 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.912645 kubelet[2770]: E1216 13:05:40.912504 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.43:6443/api/v1/nodes\": dial tcp 10.200.0.43:6443: connect: connection refused" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:40.996897 containerd[1705]: time="2025-12-16T13:05:40.996779804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4,Uid:19a23d0c85ed28c26c267aeb81ffe648,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:41.002395 containerd[1705]: time="2025-12-16T13:05:41.002354857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-efe6a0b1f4,Uid:45e0b5c0a2b8fe9080769bcf396b886f,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:41.006018 containerd[1705]: time="2025-12-16T13:05:41.005989386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-efe6a0b1f4,Uid:24677b3bd09f7a0653f64772af8018cd,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:41.145299 kubelet[2770]: E1216 13:05:41.145254 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-efe6a0b1f4?timeout=10s\": dial tcp 10.200.0.43:6443: connect: connection refused" interval="800ms" Dec 16 13:05:41.314344 kubelet[2770]: I1216 13:05:41.314242 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:41.314718 kubelet[2770]: E1216 13:05:41.314608 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.43:6443/api/v1/nodes\": dial tcp 10.200.0.43:6443: connect: connection refused" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:41.423539 kubelet[2770]: E1216 13:05:41.423494 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:05:41.510066 kubelet[2770]: E1216 13:05:41.510026 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:05:41.552501 kubelet[2770]: E1216 13:05:41.552436 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:05:41.589789 containerd[1705]: time="2025-12-16T13:05:41.589699559Z" level=info msg="connecting to shim 2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10" address="unix:///run/containerd/s/fb93bba1fcbc5cb4a217c03eea1e8ed4eebe25deaca318311b83ae4e4ea5061f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:41.612639 systemd[1]: Started cri-containerd-2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10.scope - libcontainer container 2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10. Dec 16 13:05:41.700909 containerd[1705]: time="2025-12-16T13:05:41.699763081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4,Uid:19a23d0c85ed28c26c267aeb81ffe648,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10\"" Dec 16 13:05:41.708885 containerd[1705]: time="2025-12-16T13:05:41.708833712Z" level=info msg="connecting to shim 4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7" address="unix:///run/containerd/s/66473c6f4fe885ba58e9b9c1b2196b539f620bf4fdc4b16ad3700e160e80f25e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:41.709440 containerd[1705]: time="2025-12-16T13:05:41.709405387Z" level=info msg="connecting to shim 7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58" address="unix:///run/containerd/s/c2fccf397981f554f1daf06ff4bac573f412086895f115b2b45630a181e10cd7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:41.714936 kubelet[2770]: E1216 13:05:41.714889 2770 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-efe6a0b1f4&limit=500&resourceVersion=0\": dial tcp 10.200.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:41.741646 systemd[1]: Started cri-containerd-4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7.scope - libcontainer container 4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7. Dec 16 13:05:41.743464 systemd[1]: Started cri-containerd-7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58.scope - libcontainer container 7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58. 
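Each "connecting to shim <id>" entry pairs a pod sandbox with a containerd shim socket under /run/containerd/s/, and the cri-containerd-<id>.scope units are the systemd scopes tracking those sandboxes. A minimal sketch viewing the same objects through the CRI (the runtime endpoint is the stock default, not shown in this log):

    # Pod sandboxes known to containerd's CRI plugin
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    # All containers, including any still being created
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a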
Dec 16 13:05:41.793860 containerd[1705]: time="2025-12-16T13:05:41.793727128Z" level=info msg="CreateContainer within sandbox \"2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:05:41.945868 kubelet[2770]: E1216 13:05:41.945827 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-efe6a0b1f4?timeout=10s\": dial tcp 10.200.0.43:6443: connect: connection refused" interval="1.6s" Dec 16 13:05:42.116161 kubelet[2770]: I1216 13:05:42.116129 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:42.116522 kubelet[2770]: E1216 13:05:42.116495 2770 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.43:6443/api/v1/nodes\": dial tcp 10.200.0.43:6443: connect: connection refused" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:42.186670 containerd[1705]: time="2025-12-16T13:05:42.186626652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-efe6a0b1f4,Uid:24677b3bd09f7a0653f64772af8018cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58\"" Dec 16 13:05:42.265376 containerd[1705]: time="2025-12-16T13:05:42.242501972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-efe6a0b1f4,Uid:45e0b5c0a2b8fe9080769bcf396b886f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7\"" Dec 16 13:05:42.289370 containerd[1705]: time="2025-12-16T13:05:42.289334820Z" level=info msg="CreateContainer within sandbox \"7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:05:42.294540 containerd[1705]: time="2025-12-16T13:05:42.294511608Z" level=info msg="Container bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:42.295496 containerd[1705]: time="2025-12-16T13:05:42.295275874Z" level=info msg="CreateContainer within sandbox \"4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:05:42.326529 containerd[1705]: time="2025-12-16T13:05:42.326471802Z" level=info msg="Container 5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:42.339249 containerd[1705]: time="2025-12-16T13:05:42.339215792Z" level=info msg="CreateContainer within sandbox \"2fd094fcc83a848a504c60ab14d07147553e3f3ab127d6560ccec3bba478ab10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0\"" Dec 16 13:05:42.339900 containerd[1705]: time="2025-12-16T13:05:42.339876545Z" level=info msg="StartContainer for \"bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0\"" Dec 16 13:05:42.340749 containerd[1705]: time="2025-12-16T13:05:42.340710657Z" level=info msg="connecting to shim bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0" address="unix:///run/containerd/s/fb93bba1fcbc5cb4a217c03eea1e8ed4eebe25deaca318311b83ae4e4ea5061f" protocol=ttrpc version=3 Dec 16 13:05:42.350997 containerd[1705]: 
time="2025-12-16T13:05:42.350901195Z" level=info msg="CreateContainer within sandbox \"7490c3fb7af2705d394c26121a3980b87818837993036009cfb9014f41907e58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771\"" Dec 16 13:05:42.352339 containerd[1705]: time="2025-12-16T13:05:42.352313030Z" level=info msg="StartContainer for \"5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771\"" Dec 16 13:05:42.356504 containerd[1705]: time="2025-12-16T13:05:42.356435859Z" level=info msg="Container 6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:42.358813 systemd[1]: Started cri-containerd-bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0.scope - libcontainer container bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0. Dec 16 13:05:42.360964 containerd[1705]: time="2025-12-16T13:05:42.360938638Z" level=info msg="connecting to shim 5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771" address="unix:///run/containerd/s/c2fccf397981f554f1daf06ff4bac573f412086895f115b2b45630a181e10cd7" protocol=ttrpc version=3 Dec 16 13:05:42.387282 containerd[1705]: time="2025-12-16T13:05:42.387243461Z" level=info msg="CreateContainer within sandbox \"4e78d7316e9757fe4127e0fbe97f8df13939851ba76ff47a0c347ce60ca54ae7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb\"" Dec 16 13:05:42.387759 systemd[1]: Started cri-containerd-5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771.scope - libcontainer container 5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771. Dec 16 13:05:42.388245 containerd[1705]: time="2025-12-16T13:05:42.388224382Z" level=info msg="StartContainer for \"6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb\"" Dec 16 13:05:42.389104 containerd[1705]: time="2025-12-16T13:05:42.389077814Z" level=info msg="connecting to shim 6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb" address="unix:///run/containerd/s/66473c6f4fe885ba58e9b9c1b2196b539f620bf4fdc4b16ad3700e160e80f25e" protocol=ttrpc version=3 Dec 16 13:05:42.413660 systemd[1]: Started cri-containerd-6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb.scope - libcontainer container 6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb. 
Dec 16 13:05:42.442054 containerd[1705]: time="2025-12-16T13:05:42.441955583Z" level=info msg="StartContainer for \"bf8e9ccc7e2039b5d016b535af88591925e04fb4864a470065762ad2553759f0\" returns successfully" Dec 16 13:05:42.487368 containerd[1705]: time="2025-12-16T13:05:42.487341818Z" level=info msg="StartContainer for \"5da1e0aa475d3de0e53f6150c63574de4884eb63f1618fd06d47695da76bc771\" returns successfully" Dec 16 13:05:42.496992 containerd[1705]: time="2025-12-16T13:05:42.496917758Z" level=info msg="StartContainer for \"6d30ca8f386149051bea60a6ad972ba0cbbeb8232710422b989c07763af54deb\" returns successfully" Dec 16 13:05:42.598571 kubelet[2770]: E1216 13:05:42.597807 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:42.601768 kubelet[2770]: E1216 13:05:42.601737 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:42.602958 kubelet[2770]: E1216 13:05:42.602937 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:43.606190 kubelet[2770]: E1216 13:05:43.605869 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:43.606190 kubelet[2770]: E1216 13:05:43.606110 2770 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:43.719311 kubelet[2770]: I1216 13:05:43.718917 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.235455 kubelet[2770]: E1216 13:05:44.235412 2770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-efe6a0b1f4\" not found" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.277587 kubelet[2770]: E1216 13:05:44.277311 2770 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.2-a-efe6a0b1f4.1881b3e89b6fd91d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-efe6a0b1f4,UID:ci-4459.2.2-a-efe6a0b1f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-efe6a0b1f4,},FirstTimestamp:2025-12-16 13:05:40.527528221 +0000 UTC m=+0.672243785,LastTimestamp:2025-12-16 13:05:40.527528221 +0000 UTC m=+0.672243785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-efe6a0b1f4,}" Dec 16 13:05:44.341875 kubelet[2770]: I1216 13:05:44.341536 2770 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.341875 kubelet[2770]: E1216 13:05:44.341574 2770 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-efe6a0b1f4\": node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:44.342725 kubelet[2770]: E1216 13:05:44.342617 2770 event.go:359] 
"Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.2-a-efe6a0b1f4.1881b3e89d83c07d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-efe6a0b1f4,UID:ci-4459.2.2-a-efe6a0b1f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-efe6a0b1f4,},FirstTimestamp:2025-12-16 13:05:40.562387069 +0000 UTC m=+0.707102638,LastTimestamp:2025-12-16 13:05:40.562387069 +0000 UTC m=+0.707102638,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-efe6a0b1f4,}" Dec 16 13:05:44.371945 kubelet[2770]: E1216 13:05:44.371914 2770 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:44.472402 kubelet[2770]: E1216 13:05:44.472367 2770 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:44.527954 kubelet[2770]: I1216 13:05:44.527413 2770 apiserver.go:52] "Watching apiserver" Dec 16 13:05:44.542682 kubelet[2770]: I1216 13:05:44.542602 2770 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.543027 kubelet[2770]: I1216 13:05:44.542993 2770 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:44.546921 kubelet[2770]: E1216 13:05:44.546893 2770 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.546921 kubelet[2770]: I1216 13:05:44.546914 2770 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.548171 kubelet[2770]: E1216 13:05:44.548148 2770 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-efe6a0b1f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.548171 kubelet[2770]: I1216 13:05:44.548171 2770 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:44.549446 kubelet[2770]: E1216 13:05:44.549424 2770 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:45.308839 kubelet[2770]: I1216 13:05:45.308800 2770 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:45.315566 kubelet[2770]: I1216 13:05:45.315534 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:46.296873 systemd[1]: Reload requested from client PID 3057 ('systemctl') (unit session-9.scope)... Dec 16 13:05:46.296887 systemd[1]: Reloading... 
Dec 16 13:05:46.351516 kubelet[2770]: I1216 13:05:46.350962 2770 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:46.360856 kubelet[2770]: I1216 13:05:46.360745 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:46.388518 zram_generator::config[3104]: No configuration found. Dec 16 13:05:46.578959 systemd[1]: Reloading finished in 281 ms. Dec 16 13:05:46.609759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:46.628457 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:05:46.628707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:46.628761 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 132.2M memory peak. Dec 16 13:05:46.630175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:47.132661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:47.136867 (kubelet)[3171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:47.182903 kubelet[3171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:47.183121 kubelet[3171]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:47.183121 kubelet[3171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:47.183209 kubelet[3171]: I1216 13:05:47.183179 3171 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:47.191086 kubelet[3171]: I1216 13:05:47.191034 3171 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:05:47.191086 kubelet[3171]: I1216 13:05:47.191059 3171 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:47.192723 kubelet[3171]: I1216 13:05:47.192702 3171 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:47.193893 kubelet[3171]: I1216 13:05:47.193626 3171 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:05:47.195423 kubelet[3171]: I1216 13:05:47.195245 3171 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:47.198464 kubelet[3171]: I1216 13:05:47.198450 3171 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:47.203635 kubelet[3171]: I1216 13:05:47.203616 3171 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:05:47.204109 kubelet[3171]: I1216 13:05:47.204044 3171 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:47.204232 kubelet[3171]: I1216 13:05:47.204074 3171 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-efe6a0b1f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:47.204334 kubelet[3171]: I1216 13:05:47.204234 3171 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:47.204334 kubelet[3171]: I1216 13:05:47.204244 3171 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:05:47.204334 kubelet[3171]: I1216 13:05:47.204288 3171 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:47.204433 kubelet[3171]: I1216 13:05:47.204417 3171 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:05:47.204462 kubelet[3171]: I1216 13:05:47.204436 3171 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:47.204462 kubelet[3171]: I1216 13:05:47.204460 3171 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:05:47.204526 kubelet[3171]: I1216 13:05:47.204473 3171 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:47.208218 kubelet[3171]: I1216 13:05:47.207565 3171 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:05:47.208218 kubelet[3171]: I1216 13:05:47.208014 3171 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:47.217700 kubelet[3171]: I1216 13:05:47.217685 3171 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:05:47.217806 kubelet[3171]: I1216 13:05:47.217801 3171 server.go:1289] "Started kubelet" Dec 16 13:05:47.219792 kubelet[3171]: I1216 13:05:47.219778 3171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:47.220773 kubelet[3171]: I1216 
13:05:47.220737 3171 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:47.221352 kubelet[3171]: I1216 13:05:47.221326 3171 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:05:47.228507 kubelet[3171]: I1216 13:05:47.227711 3171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:47.228507 kubelet[3171]: I1216 13:05:47.227918 3171 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:47.228507 kubelet[3171]: I1216 13:05:47.228148 3171 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:47.229438 kubelet[3171]: I1216 13:05:47.229426 3171 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:05:47.229676 kubelet[3171]: E1216 13:05:47.229664 3171 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-efe6a0b1f4\" not found" Dec 16 13:05:47.231552 kubelet[3171]: I1216 13:05:47.231340 3171 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:05:47.231725 kubelet[3171]: I1216 13:05:47.231714 3171 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:05:47.233901 kubelet[3171]: I1216 13:05:47.233877 3171 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:47.233978 kubelet[3171]: I1216 13:05:47.233958 3171 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:47.240133 kubelet[3171]: E1216 13:05:47.239830 3171 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:47.240216 kubelet[3171]: I1216 13:05:47.240135 3171 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:47.248159 kubelet[3171]: I1216 13:05:47.248138 3171 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:47.250300 kubelet[3171]: I1216 13:05:47.250264 3171 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:05:47.250300 kubelet[3171]: I1216 13:05:47.250279 3171 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:05:47.250300 kubelet[3171]: I1216 13:05:47.250294 3171 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
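
[Editor's note] The NodeConfig dump above carries the kubelet's hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. Each threshold is either an absolute quantity or a percentage of capacity. A minimal sketch of that evaluation rule, not kubelet's actual implementation:

// Sketch: a hard-eviction signal trips when the observed available amount
// falls below an absolute quantity, or below a fraction of capacity.
package main

import "fmt"

type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes; 0 means "use Percentage"
	Percentage float64 // fraction of capacity, e.g. 0.1
}

func tripped(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if limit == 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	// Values taken from the logged NodeConfig above.
	mem := Threshold{Signal: "memory.available", Quantity: 100 << 20}
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}
	fmt.Println(tripped(mem, 64<<20, 8<<30))      // true: 64Mi available < 100Mi
	fmt.Println(tripped(nodefs, 20<<30, 100<<30)) // false: 20% free > 10% floor
}
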
Dec 16 13:05:47.250300 kubelet[3171]: I1216 13:05:47.250301 3171 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:05:47.251843 kubelet[3171]: E1216 13:05:47.250334 3171 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:47.276929 kubelet[3171]: I1216 13:05:47.276911 3171 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:47.276929 kubelet[3171]: I1216 13:05:47.276923 3171 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:47.277027 kubelet[3171]: I1216 13:05:47.276940 3171 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:47.277052 kubelet[3171]: I1216 13:05:47.277046 3171 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:05:47.277081 kubelet[3171]: I1216 13:05:47.277056 3171 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:05:47.277081 kubelet[3171]: I1216 13:05:47.277080 3171 policy_none.go:49] "None policy: Start" Dec 16 13:05:47.277125 kubelet[3171]: I1216 13:05:47.277090 3171 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:05:47.277125 kubelet[3171]: I1216 13:05:47.277098 3171 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:05:47.277190 kubelet[3171]: I1216 13:05:47.277178 3171 state_mem.go:75] "Updated machine memory state" Dec 16 13:05:47.281121 kubelet[3171]: E1216 13:05:47.280840 3171 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:47.281121 kubelet[3171]: I1216 13:05:47.280974 3171 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:47.281121 kubelet[3171]: I1216 13:05:47.280984 3171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:47.281860 kubelet[3171]: I1216 13:05:47.281844 3171 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:47.283326 kubelet[3171]: E1216 13:05:47.283280 3171 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:05:47.351880 kubelet[3171]: I1216 13:05:47.351314 3171 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.351880 kubelet[3171]: I1216 13:05:47.351647 3171 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.352273 kubelet[3171]: I1216 13:05:47.352175 3171 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.363868 kubelet[3171]: I1216 13:05:47.362996 3171 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:47.363868 kubelet[3171]: I1216 13:05:47.363051 3171 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:47.363868 kubelet[3171]: E1216 13:05:47.363090 3171 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-efe6a0b1f4\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.363868 kubelet[3171]: I1216 13:05:47.363837 3171 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:47.363868 kubelet[3171]: E1216 13:05:47.363874 3171 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.385220 kubelet[3171]: I1216 13:05:47.383814 3171 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.393897 kubelet[3171]: I1216 13:05:47.393139 3171 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.393897 kubelet[3171]: I1216 13:05:47.393206 3171 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433510 kubelet[3171]: I1216 13:05:47.432949 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433510 kubelet[3171]: I1216 13:05:47.432986 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433510 kubelet[3171]: I1216 13:05:47.433008 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 
13:05:47.433510 kubelet[3171]: I1216 13:05:47.433029 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433510 kubelet[3171]: I1216 13:05:47.433049 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45e0b5c0a2b8fe9080769bcf396b886f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"45e0b5c0a2b8fe9080769bcf396b886f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433720 kubelet[3171]: I1216 13:05:47.433067 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24677b3bd09f7a0653f64772af8018cd-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"24677b3bd09f7a0653f64772af8018cd\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433720 kubelet[3171]: I1216 13:05:47.433090 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433720 kubelet[3171]: I1216 13:05:47.433111 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:47.433720 kubelet[3171]: I1216 13:05:47.433130 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19a23d0c85ed28c26c267aeb81ffe648-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4\" (UID: \"19a23d0c85ed28c26c267aeb81ffe648\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:48.212042 kubelet[3171]: I1216 13:05:48.211789 3171 apiserver.go:52] "Watching apiserver" Dec 16 13:05:48.232513 kubelet[3171]: I1216 13:05:48.232368 3171 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:48.270122 kubelet[3171]: I1216 13:05:48.269851 3171 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:48.278866 kubelet[3171]: I1216 13:05:48.278840 3171 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:48.278969 kubelet[3171]: E1216 13:05:48.278911 3171 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-efe6a0b1f4\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:05:48.303404 kubelet[3171]: I1216 13:05:48.303119 
3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-efe6a0b1f4" podStartSLOduration=3.303105082 podStartE2EDuration="3.303105082s" podCreationTimestamp="2025-12-16 13:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:48.293709885 +0000 UTC m=+1.151927533" watchObservedRunningTime="2025-12-16 13:05:48.303105082 +0000 UTC m=+1.161322730" Dec 16 13:05:48.314102 kubelet[3171]: I1216 13:05:48.314063 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-efe6a0b1f4" podStartSLOduration=1.314048683 podStartE2EDuration="1.314048683s" podCreationTimestamp="2025-12-16 13:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:48.303324052 +0000 UTC m=+1.161541704" watchObservedRunningTime="2025-12-16 13:05:48.314048683 +0000 UTC m=+1.172266334" Dec 16 13:05:48.329644 kubelet[3171]: I1216 13:05:48.329578 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-efe6a0b1f4" podStartSLOduration=2.329546118 podStartE2EDuration="2.329546118s" podCreationTimestamp="2025-12-16 13:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:48.314377596 +0000 UTC m=+1.172595249" watchObservedRunningTime="2025-12-16 13:05:48.329546118 +0000 UTC m=+1.187763753" Dec 16 13:05:51.586923 systemd[1]: Created slice kubepods-besteffort-pod8d8185bd_7bd3_411a_b6e3_36f48b742299.slice - libcontainer container kubepods-besteffort-pod8d8185bd_7bd3_411a_b6e3_36f48b742299.slice. Dec 16 13:05:51.617021 kubelet[3171]: I1216 13:05:51.616983 3171 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:05:51.617649 containerd[1705]: time="2025-12-16T13:05:51.617620536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
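
[Editor's note] The pod_startup_latency_tracker entries above make the arithmetic visible: for these static control-plane pods nothing was pulled (both pull timestamps are the zero time), so podStartSLOduration equals podStartE2EDuration, which is just the watch-observed running time minus podCreationTimestamp. A quick check against the kube-scheduler entry:

// Reproducing podStartSLOduration for kube-scheduler from the timestamps
// logged above (no image pull, so SLO == E2E).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-16 13:05:45 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-16 13:05:48.303105082 +0000 UTC")
	fmt.Println(running.Sub(created)) // 3.303105082s, matching podStartSLOduration
}
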
Dec 16 13:05:51.617899 kubelet[3171]: I1216 13:05:51.617808 3171 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:05:51.656302 kubelet[3171]: I1216 13:05:51.656266 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-proxy\") pod \"kube-proxy-jgqbz\" (UID: \"8d8185bd-7bd3-411a-b6e3-36f48b742299\") " pod="kube-system/kube-proxy-jgqbz" Dec 16 13:05:51.656302 kubelet[3171]: I1216 13:05:51.656307 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d8185bd-7bd3-411a-b6e3-36f48b742299-xtables-lock\") pod \"kube-proxy-jgqbz\" (UID: \"8d8185bd-7bd3-411a-b6e3-36f48b742299\") " pod="kube-system/kube-proxy-jgqbz" Dec 16 13:05:51.656435 kubelet[3171]: I1216 13:05:51.656420 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d8185bd-7bd3-411a-b6e3-36f48b742299-lib-modules\") pod \"kube-proxy-jgqbz\" (UID: \"8d8185bd-7bd3-411a-b6e3-36f48b742299\") " pod="kube-system/kube-proxy-jgqbz" Dec 16 13:05:51.656471 kubelet[3171]: I1216 13:05:51.656444 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw22g\" (UniqueName: \"kubernetes.io/projected/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-api-access-nw22g\") pod \"kube-proxy-jgqbz\" (UID: \"8d8185bd-7bd3-411a-b6e3-36f48b742299\") " pod="kube-system/kube-proxy-jgqbz" Dec 16 13:05:51.761257 kubelet[3171]: E1216 13:05:51.761221 3171 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:05:51.761257 kubelet[3171]: E1216 13:05:51.761249 3171 projected.go:194] Error preparing data for projected volume kube-api-access-nw22g for pod kube-system/kube-proxy-jgqbz: configmap "kube-root-ca.crt" not found Dec 16 13:05:51.761428 kubelet[3171]: E1216 13:05:51.761329 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-api-access-nw22g podName:8d8185bd-7bd3-411a-b6e3-36f48b742299 nodeName:}" failed. No retries permitted until 2025-12-16 13:05:52.261307042 +0000 UTC m=+5.119524690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nw22g" (UniqueName: "kubernetes.io/projected/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-api-access-nw22g") pod "kube-proxy-jgqbz" (UID: "8d8185bd-7bd3-411a-b6e3-36f48b742299") : configmap "kube-root-ca.crt" not found Dec 16 13:05:52.362030 kubelet[3171]: E1216 13:05:52.361981 3171 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:05:52.362030 kubelet[3171]: E1216 13:05:52.362031 3171 projected.go:194] Error preparing data for projected volume kube-api-access-nw22g for pod kube-system/kube-proxy-jgqbz: configmap "kube-root-ca.crt" not found Dec 16 13:05:52.362215 kubelet[3171]: E1216 13:05:52.362099 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-api-access-nw22g podName:8d8185bd-7bd3-411a-b6e3-36f48b742299 nodeName:}" failed. No retries permitted until 2025-12-16 13:05:53.362070234 +0000 UTC m=+6.220287878 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nw22g" (UniqueName: "kubernetes.io/projected/8d8185bd-7bd3-411a-b6e3-36f48b742299-kube-api-access-nw22g") pod "kube-proxy-jgqbz" (UID: "8d8185bd-7bd3-411a-b6e3-36f48b742299") : configmap "kube-root-ca.crt" not found Dec 16 13:05:52.836371 systemd[1]: Created slice kubepods-besteffort-pod8dbda9d3_328c_4192_a99c_1d9ba354e0b2.slice - libcontainer container kubepods-besteffort-pod8dbda9d3_328c_4192_a99c_1d9ba354e0b2.slice. Dec 16 13:05:52.864818 kubelet[3171]: I1216 13:05:52.864789 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8dbda9d3-328c-4192-a99c-1d9ba354e0b2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kvmbm\" (UID: \"8dbda9d3-328c-4192-a99c-1d9ba354e0b2\") " pod="tigera-operator/tigera-operator-7dcd859c48-kvmbm" Dec 16 13:05:52.865090 kubelet[3171]: I1216 13:05:52.864850 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwk8\" (UniqueName: \"kubernetes.io/projected/8dbda9d3-328c-4192-a99c-1d9ba354e0b2-kube-api-access-ktwk8\") pod \"tigera-operator-7dcd859c48-kvmbm\" (UID: \"8dbda9d3-328c-4192-a99c-1d9ba354e0b2\") " pod="tigera-operator/tigera-operator-7dcd859c48-kvmbm" Dec 16 13:05:53.139962 containerd[1705]: time="2025-12-16T13:05:53.139916580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kvmbm,Uid:8dbda9d3-328c-4192-a99c-1d9ba354e0b2,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:05:53.176997 containerd[1705]: time="2025-12-16T13:05:53.175821016Z" level=info msg="connecting to shim f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f" address="unix:///run/containerd/s/7256bb45bd7d8fee18429ddc15cae71ecfefda05b9590c2074a9468317651892" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:53.202626 systemd[1]: Started cri-containerd-f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f.scope - libcontainer container f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f. Dec 16 13:05:53.243920 containerd[1705]: time="2025-12-16T13:05:53.243888334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kvmbm,Uid:8dbda9d3-328c-4192-a99c-1d9ba354e0b2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f\"" Dec 16 13:05:53.245583 containerd[1705]: time="2025-12-16T13:05:53.245557217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:05:53.397211 containerd[1705]: time="2025-12-16T13:05:53.397118429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgqbz,Uid:8d8185bd-7bd3-411a-b6e3-36f48b742299,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:53.434583 containerd[1705]: time="2025-12-16T13:05:53.434540540Z" level=info msg="connecting to shim 2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600" address="unix:///run/containerd/s/16e84b333f81a3e8330d059acc4456096b994cd647304fb8f87d4006f1fa684f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:53.451686 systemd[1]: Started cri-containerd-2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600.scope - libcontainer container 2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600. 
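
[Editor's note] The MountVolume.SetUp failures above retry with exponential backoff: the first retry is scheduled 500ms out, the next 1s, doubling until the projected kube-api-access volume can be built once the kube-root-ca.crt ConfigMap appears. A sketch of that delay schedule; the cap and further doublings are assumptions, only the 500ms → 1s step is from the log:

// Sketch of the retry pattern in kubelet's nestedpendingoperations errors above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
	maxDelay := 2 * time.Minute     // assumed cap, not taken from the log
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
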
Dec 16 13:05:53.475971 containerd[1705]: time="2025-12-16T13:05:53.475933635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgqbz,Uid:8d8185bd-7bd3-411a-b6e3-36f48b742299,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600\"" Dec 16 13:05:53.485658 containerd[1705]: time="2025-12-16T13:05:53.485621398Z" level=info msg="CreateContainer within sandbox \"2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:05:53.502978 containerd[1705]: time="2025-12-16T13:05:53.502946505Z" level=info msg="Container 1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:53.519347 containerd[1705]: time="2025-12-16T13:05:53.519315406Z" level=info msg="CreateContainer within sandbox \"2f36eb658f8c1854be87f0738b97a2705bd77acae6c17b4cefb7e2df192be600\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb\"" Dec 16 13:05:53.521195 containerd[1705]: time="2025-12-16T13:05:53.519835420Z" level=info msg="StartContainer for \"1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb\"" Dec 16 13:05:53.522142 containerd[1705]: time="2025-12-16T13:05:53.522114123Z" level=info msg="connecting to shim 1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb" address="unix:///run/containerd/s/16e84b333f81a3e8330d059acc4456096b994cd647304fb8f87d4006f1fa684f" protocol=ttrpc version=3 Dec 16 13:05:53.540669 systemd[1]: Started cri-containerd-1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb.scope - libcontainer container 1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb. Dec 16 13:05:53.594509 containerd[1705]: time="2025-12-16T13:05:53.594443347Z" level=info msg="StartContainer for \"1f7efcec29ffddce031a6d6b1039c6284b57f117c2ef00218ade7abb5099bfbb\" returns successfully" Dec 16 13:05:54.297270 kubelet[3171]: I1216 13:05:54.297213 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jgqbz" podStartSLOduration=3.297197384 podStartE2EDuration="3.297197384s" podCreationTimestamp="2025-12-16 13:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:54.296915997 +0000 UTC m=+7.155133646" watchObservedRunningTime="2025-12-16 13:05:54.297197384 +0000 UTC m=+7.155415036" Dec 16 13:05:54.925693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960951819.mount: Deactivated successfully. 
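
[Editor's note] The kube-proxy sequence above is the CRI lifecycle end to end: RunPodSandbox returns a sandbox ID, CreateContainer places kube-proxy inside it, and StartContainer launches it; containerd itself reaches the per-pod shim over the ttrpc address shown in the "connecting to shim" entries, while the CRI API is gRPC on the containerd socket. A hypothetical, heavily trimmed sketch of the first call (a real kubelet supplies far more sandbox configuration):

// Hedged sketch of a CRI RunPodSandbox call against containerd,
// assuming google.golang.org/grpc and k8s.io/cri-api.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Skeletal config using the pod identity from the log entries above.
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-jgqbz",
				Uid:       "8d8185bd-7bd3-411a-b6e3-36f48b742299",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 2f36eb65... above
}
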
Dec 16 13:05:55.449285 containerd[1705]: time="2025-12-16T13:05:55.449240335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.451506 containerd[1705]: time="2025-12-16T13:05:55.451415468Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:05:55.453992 containerd[1705]: time="2025-12-16T13:05:55.453948161Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.457303 containerd[1705]: time="2025-12-16T13:05:55.457261074Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:55.457819 containerd[1705]: time="2025-12-16T13:05:55.457702338Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.212112602s" Dec 16 13:05:55.457819 containerd[1705]: time="2025-12-16T13:05:55.457731491Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:05:55.466062 containerd[1705]: time="2025-12-16T13:05:55.466031375Z" level=info msg="CreateContainer within sandbox \"f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:05:55.490970 containerd[1705]: time="2025-12-16T13:05:55.490345831Z" level=info msg="Container 88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:55.506330 containerd[1705]: time="2025-12-16T13:05:55.506297082Z" level=info msg="CreateContainer within sandbox \"f2c67b2e42646242f9062aad2f76c1f8ea58d39610ddedc24dc87fe36b7ef95f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c\"" Dec 16 13:05:55.506807 containerd[1705]: time="2025-12-16T13:05:55.506762813Z" level=info msg="StartContainer for \"88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c\"" Dec 16 13:05:55.508168 containerd[1705]: time="2025-12-16T13:05:55.508138941Z" level=info msg="connecting to shim 88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c" address="unix:///run/containerd/s/7256bb45bd7d8fee18429ddc15cae71ecfefda05b9590c2074a9468317651892" protocol=ttrpc version=3 Dec 16 13:05:55.528653 systemd[1]: Started cri-containerd-88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c.scope - libcontainer container 88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c. 
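
[Editor's note] The pull entries above give enough to sanity-check throughput: roughly 25 MB of operator image came over in the reported 2.212112602s.

// Back-of-the-envelope check on the PullImage entries above.
package main

import "fmt"

func main() {
	const bytesRead = 25061691  // "bytes read" from the stop-pulling entry
	const seconds = 2.212112602 // duration reported by PullImage
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ≈ 11.3 MB/s from quay.io
}
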
Dec 16 13:05:55.562445 containerd[1705]: time="2025-12-16T13:05:55.561555233Z" level=info msg="StartContainer for \"88d66e308d731824b8981402240ed873da9ee2fdf94718e8ffd28c0d7c8e976c\" returns successfully" Dec 16 13:05:56.311501 kubelet[3171]: I1216 13:05:56.311430 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kvmbm" podStartSLOduration=2.097983568 podStartE2EDuration="4.311413269s" podCreationTimestamp="2025-12-16 13:05:52 +0000 UTC" firstStartedPulling="2025-12-16 13:05:53.24507182 +0000 UTC m=+6.103289461" lastFinishedPulling="2025-12-16 13:05:55.458501509 +0000 UTC m=+8.316719162" observedRunningTime="2025-12-16 13:05:56.311260879 +0000 UTC m=+9.169478555" watchObservedRunningTime="2025-12-16 13:05:56.311413269 +0000 UTC m=+9.169630924" Dec 16 13:05:56.330464 waagent[1896]: 2025-12-16T13:05:56.330417Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Dec 16 13:05:56.336336 waagent[1896]: 2025-12-16T13:05:56.336306Z INFO ExtHandler Dec 16 13:05:56.336505 waagent[1896]: 2025-12-16T13:05:56.336389Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4ab6db92-b45f-4a19-896d-9e473d62305c eTag: 13389710524637690877 source: Fabric] Dec 16 13:05:56.336691 waagent[1896]: 2025-12-16T13:05:56.336665Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 13:05:56.337081 waagent[1896]: 2025-12-16T13:05:56.337050Z INFO ExtHandler Dec 16 13:05:56.337137 waagent[1896]: 2025-12-16T13:05:56.337099Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Dec 16 13:05:56.402234 waagent[1896]: 2025-12-16T13:05:56.402199Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:05:56.477153 waagent[1896]: 2025-12-16T13:05:56.477093Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:05:56.477546 waagent[1896]: 2025-12-16T13:05:56.477474Z INFO ExtHandler Fetch goal state completed Dec 16 13:05:56.477820 waagent[1896]: 2025-12-16T13:05:56.477796Z INFO ExtHandler ExtHandler Dec 16 13:05:56.477862 waagent[1896]: 2025-12-16T13:05:56.477847Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c003b566-7eb6-4b18-95bf-583f939f7e29 correlation 58948a11-560f-4e43-ab6c-7b3c6be378fe created: 2025-12-16T13:05:47.367523Z] Dec 16 13:05:56.478051 waagent[1896]: 2025-12-16T13:05:56.478029Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:05:56.478446 waagent[1896]: 2025-12-16T13:05:56.478419Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Dec 16 13:06:01.296216 sudo[2154]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:01.384514 sshd[2153]: Connection closed by 10.200.16.10 port 60322 Dec 16 13:06:01.385035 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:01.391035 systemd-logind[1684]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:06:01.392085 systemd[1]: sshd@6-10.200.0.43:22-10.200.16.10:60322.service: Deactivated successfully. Dec 16 13:06:01.396189 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:06:01.396541 systemd[1]: session-9.scope: Consumed 4.830s CPU time, 231.1M memory peak. Dec 16 13:06:01.399584 systemd-logind[1684]: Removed session 9. 
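
[Editor's note] Unlike the static pods earlier, tigera-operator had a real image pull, and its startup-latency entry above shows how the tracker discounts it: podStartSLOduration is podStartE2EDuration minus the pull window bounded by firstStartedPulling and lastFinishedPulling. Using the monotonic offsets (the "m=+…" suffixes) from that entry:

// Reproducing podStartSLOduration for tigera-operator from the log above.
package main

import "fmt"

func main() {
	const firstPull, lastPull = 6.103289461, 8.316719162 // m=+ offsets, seconds
	const e2e = 4.311413269                              // podStartE2EDuration
	slo := e2e - (lastPull - firstPull)                  // subtract the pull window
	fmt.Printf("%.9f\n", slo) // 2.097983568 — the logged podStartSLOduration
}
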
Dec 16 13:06:06.125814 systemd[1]: Created slice kubepods-besteffort-podbf0bd7be_d807_46ef_94b6_c70bafded600.slice - libcontainer container kubepods-besteffort-podbf0bd7be_d807_46ef_94b6_c70bafded600.slice. Dec 16 13:06:06.137250 kubelet[3171]: I1216 13:06:06.137076 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf0bd7be-d807-46ef-94b6-c70bafded600-typha-certs\") pod \"calico-typha-574dc57bb5-zh8tg\" (UID: \"bf0bd7be-d807-46ef-94b6-c70bafded600\") " pod="calico-system/calico-typha-574dc57bb5-zh8tg" Dec 16 13:06:06.137250 kubelet[3171]: I1216 13:06:06.137121 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0bd7be-d807-46ef-94b6-c70bafded600-tigera-ca-bundle\") pod \"calico-typha-574dc57bb5-zh8tg\" (UID: \"bf0bd7be-d807-46ef-94b6-c70bafded600\") " pod="calico-system/calico-typha-574dc57bb5-zh8tg" Dec 16 13:06:06.137250 kubelet[3171]: I1216 13:06:06.137144 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75svj\" (UniqueName: \"kubernetes.io/projected/bf0bd7be-d807-46ef-94b6-c70bafded600-kube-api-access-75svj\") pod \"calico-typha-574dc57bb5-zh8tg\" (UID: \"bf0bd7be-d807-46ef-94b6-c70bafded600\") " pod="calico-system/calico-typha-574dc57bb5-zh8tg" Dec 16 13:06:06.312603 systemd[1]: Created slice kubepods-besteffort-poda39c8eae_c70b_47ba_822a_9e60a24a7036.slice - libcontainer container kubepods-besteffort-poda39c8eae_c70b_47ba_822a_9e60a24a7036.slice. Dec 16 13:06:06.339275 kubelet[3171]: I1216 13:06:06.339240 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-lib-modules\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339275 kubelet[3171]: I1216 13:06:06.339278 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a39c8eae-c70b-47ba-822a-9e60a24a7036-node-certs\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339417 kubelet[3171]: I1216 13:06:06.339294 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-policysync\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339417 kubelet[3171]: I1216 13:06:06.339312 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-flexvol-driver-host\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339417 kubelet[3171]: I1216 13:06:06.339328 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39c8eae-c70b-47ba-822a-9e60a24a7036-tigera-ca-bundle\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " 
pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339417 kubelet[3171]: I1216 13:06:06.339354 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59j8\" (UniqueName: \"kubernetes.io/projected/a39c8eae-c70b-47ba-822a-9e60a24a7036-kube-api-access-t59j8\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339417 kubelet[3171]: I1216 13:06:06.339369 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-cni-log-dir\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339569 kubelet[3171]: I1216 13:06:06.339383 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-var-lib-calico\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339569 kubelet[3171]: I1216 13:06:06.339398 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-xtables-lock\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339569 kubelet[3171]: I1216 13:06:06.339415 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-cni-bin-dir\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339569 kubelet[3171]: I1216 13:06:06.339430 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-cni-net-dir\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.339569 kubelet[3171]: I1216 13:06:06.339446 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a39c8eae-c70b-47ba-822a-9e60a24a7036-var-run-calico\") pod \"calico-node-dq44b\" (UID: \"a39c8eae-c70b-47ba-822a-9e60a24a7036\") " pod="calico-system/calico-node-dq44b" Dec 16 13:06:06.433989 containerd[1705]: time="2025-12-16T13:06:06.433831553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574dc57bb5-zh8tg,Uid:bf0bd7be-d807-46ef-94b6-c70bafded600,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:06.443281 kubelet[3171]: E1216 13:06:06.443118 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.443281 kubelet[3171]: W1216 13:06:06.443143 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.443281 kubelet[3171]: E1216 13:06:06.443179 3171 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.444364 kubelet[3171]: E1216 13:06:06.444349 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.447516 kubelet[3171]: W1216 13:06:06.447463 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.449559 kubelet[3171]: E1216 13:06:06.447578 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.449668 kubelet[3171]: E1216 13:06:06.449655 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.449747 kubelet[3171]: W1216 13:06:06.449714 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.449817 kubelet[3171]: E1216 13:06:06.449748 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.449989 kubelet[3171]: E1216 13:06:06.449976 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.450036 kubelet[3171]: W1216 13:06:06.449989 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.450036 kubelet[3171]: E1216 13:06:06.450000 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.452352 kubelet[3171]: E1216 13:06:06.451447 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.452352 kubelet[3171]: W1216 13:06:06.451465 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.452352 kubelet[3171]: E1216 13:06:06.451508 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.462007 kubelet[3171]: E1216 13:06:06.461976 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.462136 kubelet[3171]: W1216 13:06:06.462088 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.462136 kubelet[3171]: E1216 13:06:06.462102 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:06:06.482272 containerd[1705]: time="2025-12-16T13:06:06.482162670Z" level=info msg="connecting to shim d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29" address="unix:///run/containerd/s/af14b0e8529a6f6f211bd45d1225e6cb224f56d1ea4c40d7db5692dfde4eabc6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:06.511855 systemd[1]: Started cri-containerd-d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29.scope - libcontainer container d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29. Dec 16 13:06:06.514354 kubelet[3171]: E1216 13:06:06.513647 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec" Dec 16 13:06:06.533720 kubelet[3171]: E1216 13:06:06.533701 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.533720 kubelet[3171]: W1216 13:06:06.533720 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.533829 kubelet[3171]: E1216 13:06:06.533761 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.533926 kubelet[3171]: E1216 13:06:06.533917 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.533952 kubelet[3171]: W1216 13:06:06.533927 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.533952 kubelet[3171]: E1216 13:06:06.533936 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:06:06.534089 kubelet[3171]: E1216 13:06:06.534079 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:06:06.534116 kubelet[3171]: W1216 13:06:06.534089 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:06:06.534116 kubelet[3171]: E1216 13:06:06.534097 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" […identical FlexVolume driver-call init failures (driver-call.go:262, driver-call.go:149, plugins.go:703) logged between 13:06:06.534 and 13:06:06.541 omitted…] Dec 16 13:06:06.541195 kubelet[3171]: E1216 13:06:06.541175 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 16 13:06:06.542544 kubelet[3171]: I1216 13:06:06.542440 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ll5\" (UniqueName: \"kubernetes.io/projected/f3c338af-232e-46f3-9597-30d05ba9e1ec-kube-api-access-44ll5\") pod \"csi-node-driver-jf8nl\" (UID: \"f3c338af-232e-46f3-9597-30d05ba9e1ec\") " pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:06.543603 kubelet[3171]: I1216 13:06:06.543587 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3c338af-232e-46f3-9597-30d05ba9e1ec-socket-dir\") pod \"csi-node-driver-jf8nl\" (UID: \"f3c338af-232e-46f3-9597-30d05ba9e1ec\") " pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:06.544721 kubelet[3171]: I1216 13:06:06.544667 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3c338af-232e-46f3-9597-30d05ba9e1ec-registration-dir\") pod \"csi-node-driver-jf8nl\" (UID: \"f3c338af-232e-46f3-9597-30d05ba9e1ec\") " pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:06.544887 kubelet[3171]: I1216 13:06:06.544863 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3c338af-232e-46f3-9597-30d05ba9e1ec-varrun\") pod \"csi-node-driver-jf8nl\" (UID: \"f3c338af-232e-46f3-9597-30d05ba9e1ec\") " pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:06.545721 kubelet[3171]: I1216 13:06:06.545698 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3c338af-232e-46f3-9597-30d05ba9e1ec-kubelet-dir\") pod \"csi-node-driver-jf8nl\" (UID: \"f3c338af-232e-46f3-9597-30d05ba9e1ec\") " pod="calico-system/csi-node-driver-jf8nl"
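The I-lines above are the kubelet's volume reconciler registering the csi-node-driver pod's volumes; each UniqueName encodes the plugin, the pod UID, and the volume name. A rough sketch of pod-spec declarations that would produce entries like these, with hypothetical host paths (only the volume names and the pod UID are taken from the log):

```go
// Hypothetical reconstruction of the volume declarations behind the
// reconciler_common.go entries above. Host paths are illustrative
// placeholders; only the volume names and the pod UID appear in the log.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

const podUID = "f3c338af-232e-46f3-9597-30d05ba9e1ec" // from the log

func hostPath(name, path string) v1.Volume {
	return v1.Volume{
		Name:         name,
		VolumeSource: v1.VolumeSource{HostPath: &v1.HostPathVolumeSource{Path: path}},
	}
}

func main() {
	vols := []v1.Volume{
		hostPath("socket-dir", "/var/lib/kubelet/plugins/csi.example"),    // placeholder path
		hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"), // placeholder path
		hostPath("varrun", "/var/run"),                                    // placeholder path
		hostPath("kubelet-dir", "/var/lib/kubelet"),                       // placeholder path
	}
	for _, v := range vols {
		// Host-path volumes are keyed as kubernetes.io/host-path/<pod-UID>-<name>,
		// matching the UniqueName fields logged by the reconciler.
		fmt.Printf("kubernetes.io/host-path/%s-%s\n", podUID, v.Name)
	}
}
```

The projected kube-api-access-44ll5 volume follows the same pattern under the kubernetes.io/projected prefix.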
Dec 16 13:06:06.570712 containerd[1705]: time="2025-12-16T13:06:06.570674068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574dc57bb5-zh8tg,Uid:bf0bd7be-d807-46ef-94b6-c70bafded600,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29\""
Dec 16 13:06:06.572700 containerd[1705]: time="2025-12-16T13:06:06.572522239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 13:06:06.621104 containerd[1705]: time="2025-12-16T13:06:06.620972512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dq44b,Uid:a39c8eae-c70b-47ba-822a-9e60a24a7036,Namespace:calico-system,Attempt:0,}"
Dec 16 13:06:06.663077 containerd[1705]: time="2025-12-16T13:06:06.663046134Z" level=info msg="connecting to shim 55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e" address="unix:///run/containerd/s/af31f412639d8e9f6ec9f8f805c9dd5334e4418c7be66705ff6194339b53d68d" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:06:06.685916 systemd[1]: Started cri-containerd-55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e.scope - libcontainer container 55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e.
Dec 16 13:06:06.719796 containerd[1705]: time="2025-12-16T13:06:06.719772427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dq44b,Uid:a39c8eae-c70b-47ba-822a-9e60a24a7036,Namespace:calico-system,Attempt:0,} returns sandbox id \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\""
Dec 16 13:06:08.109161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747730142.mount: Deactivated successfully.
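The "connecting to shim ... protocol=ttrpc version=3" line above records containerd dialing the shim's unix socket and speaking ttrpc (containerd's lightweight protobuf RPC) over it. A bare-bones sketch of that connection using the containerd ttrpc library; the socket path is copied from the log, and the task-service client that containerd layers on top is omitted here:

```go
// Sketch of the shim connection step: dial the unix socket from the log
// line and wrap it in a ttrpc client (github.com/containerd/ttrpc).
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Socket address copied verbatim from the containerd log entry above.
	conn, err := net.Dial("unix", "/run/containerd/s/af31f412639d8e9f6ec9f8f805c9dd5334e4418c7be66705ff6194339b53d68d")
	if err != nil {
		log.Fatal(err)
	}
	client := ttrpc.NewClient(conn)
	defer client.Close()
	// containerd would now issue task API RPCs (Create, Start, ...) through client.
}
```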
Dec 16 13:06:08.251809 kubelet[3171]: E1216 13:06:08.251569 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:09.427085 containerd[1705]: time="2025-12-16T13:06:09.427041223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:09.429283 containerd[1705]: time="2025-12-16T13:06:09.429246095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 16 13:06:09.431853 containerd[1705]: time="2025-12-16T13:06:09.431802531Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:09.435506 containerd[1705]: time="2025-12-16T13:06:09.435368524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:09.436336 containerd[1705]: time="2025-12-16T13:06:09.436221563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.863666148s"
Dec 16 13:06:09.436336 containerd[1705]: time="2025-12-16T13:06:09.436254715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 16 13:06:09.437943 containerd[1705]: time="2025-12-16T13:06:09.437735848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 13:06:09.456705 containerd[1705]: time="2025-12-16T13:06:09.456678643Z" level=info msg="CreateContainer within sandbox \"d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 13:06:09.480801 containerd[1705]: time="2025-12-16T13:06:09.479622484Z" level=info msg="Container c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:09.498667 containerd[1705]: time="2025-12-16T13:06:09.498643129Z" level=info msg="CreateContainer within sandbox \"d3c3ca84961cce4f317d6929edaea0f18538422ce7d35bea4f487d0cfa20ff29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54\""
Dec 16 13:06:09.499174 containerd[1705]: time="2025-12-16T13:06:09.499071884Z" level=info msg="StartContainer for \"c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54\""
Dec 16 13:06:09.500705 containerd[1705]: time="2025-12-16T13:06:09.500532154Z" level=info msg="connecting to shim c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54" address="unix:///run/containerd/s/af14b0e8529a6f6f211bd45d1225e6cb224f56d1ea4c40d7db5692dfde4eabc6" protocol=ttrpc version=3
Dec 16 13:06:09.524641 systemd[1]: Started cri-containerd-c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54.scope - libcontainer container c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54.
Dec 16 13:06:09.573111 containerd[1705]: time="2025-12-16T13:06:09.573085294Z" level=info msg="StartContainer for \"c5d13038b58513dbe8db279a6b63cf0b23c39657ca61afd186d3803473b1df54\" returns successfully"
Dec 16 13:06:10.250716 kubelet[3171]: E1216 13:06:10.250653 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:10.338821 kubelet[3171]: I1216 13:06:10.338679 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-574dc57bb5-zh8tg" podStartSLOduration=1.4727016640000001 podStartE2EDuration="4.338662258s" podCreationTimestamp="2025-12-16 13:06:06 +0000 UTC" firstStartedPulling="2025-12-16 13:06:06.571645777 +0000 UTC m=+19.429863414" lastFinishedPulling="2025-12-16 13:06:09.437606354 +0000 UTC m=+22.295824008" observedRunningTime="2025-12-16 13:06:10.336762948 +0000 UTC m=+23.194980599" watchObservedRunningTime="2025-12-16 13:06:10.338662258 +0000 UTC m=+23.196879969"
Dec 16 13:06:10.364921 kubelet[3171]: E1216 13:06:10.364886 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:06:10.364921 kubelet[3171]: W1216 13:06:10.364913 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:06:10.365081 kubelet[3171]: E1216 13:06:10.364930 3171 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
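The "Observed pod startup duration" entry packs several clocks into one line. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:06:10.338662258 − 13:06:06 = 4.338662258s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (the m=+ offsets). A quick check using only the values in that entry:

```go
// Verifies the pod_startup_latency_tracker arithmetic from the entry above.
// All constants are the monotonic (m=+) offsets and durations from the log.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 19.429863414 // m=+ offset, seconds
		lastFinishedPulling = 22.295824008 // m=+ offset, seconds
		podStartE2E         = 4.338662258  // watchObservedRunningTime - podCreationTimestamp
	)
	imagePull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", imagePull)             // 2.865960594s
	fmt.Printf("SLO duration:      %.9fs\n", podStartE2E-imagePull) // 1.472701664s (logged as 1.4727016640000001)
}
```

So the SLO figure deliberately excludes time spent pulling images, which is why it is so much smaller than the end-to-end duration here.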
Dec 16 13:06:11.118040 containerd[1705]: time="2025-12-16T13:06:11.117993465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:11.120715 containerd[1705]: time="2025-12-16T13:06:11.120679645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 16 13:06:11.123436 containerd[1705]: time="2025-12-16T13:06:11.123403835Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:11.126424 containerd[1705]: time="2025-12-16T13:06:11.126397832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:11.126839 containerd[1705]: time="2025-12-16T13:06:11.126814615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.689050234s"
Dec 16 13:06:11.126886 containerd[1705]: time="2025-12-16T13:06:11.126847783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 16 13:06:11.133762 containerd[1705]: time="2025-12-16T13:06:11.133727984Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 13:06:11.157293 containerd[1705]: time="2025-12-16T13:06:11.153388962Z" level=info msg="Container f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:11.172446 containerd[1705]: time="2025-12-16T13:06:11.172416502Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf\""
Dec 16 13:06:11.173051 containerd[1705]: time="2025-12-16T13:06:11.173025519Z" level=info msg="StartContainer for \"f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf\""
Dec 16 13:06:11.174322 containerd[1705]: time="2025-12-16T13:06:11.174288943Z" level=info msg="connecting to shim f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf" address="unix:///run/containerd/s/af31f412639d8e9f6ec9f8f805c9dd5334e4418c7be66705ff6194339b53d68d" protocol=ttrpc version=3
Dec 16 13:06:11.197642 systemd[1]: Started cri-containerd-f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf.scope - libcontainer container f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf.
Dec 16 13:06:11.256370 containerd[1705]: time="2025-12-16T13:06:11.256331871Z" level=info msg="StartContainer for \"f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf\" returns successfully"
Dec 16 13:06:11.260628 systemd[1]: cri-containerd-f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf.scope: Deactivated successfully.
Dec 16 13:06:11.263921 containerd[1705]: time="2025-12-16T13:06:11.263888211Z" level=info msg="received container exit event container_id:\"f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf\" id:\"f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf\" pid:3866 exited_at:{seconds:1765890371 nanos:263453917}"
Dec 16 13:06:11.291356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2f72187970c588145cae9fe0029896a5f612567f255815cd5b1e10e26cd7dbf-rootfs.mount: Deactivated successfully.
Dec 16 13:06:11.322038 kubelet[3171]: I1216 13:06:11.322018 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:06:12.250839 kubelet[3171]: E1216 13:06:12.250796 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:14.250904 kubelet[3171]: E1216 13:06:14.250842 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:14.329666 containerd[1705]: time="2025-12-16T13:06:14.329608287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 13:06:16.251405 kubelet[3171]: E1216 13:06:16.251337 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:18.251631 kubelet[3171]: E1216 13:06:18.251566 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:18.381290 containerd[1705]: time="2025-12-16T13:06:18.381247434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:18.383641 containerd[1705]: time="2025-12-16T13:06:18.383605075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 16 13:06:18.391470 containerd[1705]: time="2025-12-16T13:06:18.391378728Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:18.395723 containerd[1705]: time="2025-12-16T13:06:18.395134481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
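The pull entries above show containerd resolving a tag to an immutable repo digest and reporting bytes read and wall-clock time. A sketch of the same pull through containerd's Go client; the socket path and the "k8s.io" namespace (where CRI-managed images live) are conventional defaults, assumed here rather than taken from this host's configuration:

```go
// pull_sketch.go — a minimal image pull against containerd's Go client,
// mirroring the PullImage entries in the log. Assumes the default socket
// path and the "k8s.io" namespace used for CRI-managed images.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The log's "repo digest" is the resolved content digest of the manifest.
	fmt.Println(img.Name(), img.Target().Digest)
}
```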
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:18.395723 containerd[1705]: time="2025-12-16T13:06:18.395633081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.065560677s" Dec 16 13:06:18.395723 containerd[1705]: time="2025-12-16T13:06:18.395657163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:06:18.402992 containerd[1705]: time="2025-12-16T13:06:18.402963505Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:06:18.429806 containerd[1705]: time="2025-12-16T13:06:18.429771348Z" level=info msg="Container 0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:18.444768 containerd[1705]: time="2025-12-16T13:06:18.444738111Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa\"" Dec 16 13:06:18.445368 containerd[1705]: time="2025-12-16T13:06:18.445216837Z" level=info msg="StartContainer for \"0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa\"" Dec 16 13:06:18.446977 containerd[1705]: time="2025-12-16T13:06:18.446938500Z" level=info msg="connecting to shim 0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa" address="unix:///run/containerd/s/af31f412639d8e9f6ec9f8f805c9dd5334e4418c7be66705ff6194339b53d68d" protocol=ttrpc version=3 Dec 16 13:06:18.468643 systemd[1]: Started cri-containerd-0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa.scope - libcontainer container 0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa. Dec 16 13:06:18.533735 containerd[1705]: time="2025-12-16T13:06:18.533592461Z" level=info msg="StartContainer for \"0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa\" returns successfully" Dec 16 13:06:19.815949 containerd[1705]: time="2025-12-16T13:06:19.815891466Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:06:19.819207 systemd[1]: cri-containerd-0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa.scope: Deactivated successfully. Dec 16 13:06:19.820016 systemd[1]: cri-containerd-0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa.scope: Consumed 437ms CPU time, 194.5M memory peak, 171.3M written to disk. 
Dec 16 13:06:19.822749 containerd[1705]: time="2025-12-16T13:06:19.822713141Z" level=info msg="received container exit event container_id:\"0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa\" id:\"0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa\" pid:3926 exited_at:{seconds:1765890379 nanos:822314622}" Dec 16 13:06:19.828862 kubelet[3171]: I1216 13:06:19.828843 3171 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:06:19.851072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b1a152cb922e8641e1adf516b10d4cb19a7746d811bc6ec6b9f2bcd1703e6fa-rootfs.mount: Deactivated successfully. Dec 16 13:06:20.133694 kubelet[3171]: I1216 13:06:20.133654 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/320e67c6-de90-4268-be41-844d88ae2859-tigera-ca-bundle\") pod \"calico-kube-controllers-6df6cd5b4c-5j8sl\" (UID: \"320e67c6-de90-4268-be41-844d88ae2859\") " pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" Dec 16 13:06:20.133694 kubelet[3171]: I1216 13:06:20.133695 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6gf\" (UniqueName: \"kubernetes.io/projected/320e67c6-de90-4268-be41-844d88ae2859-kube-api-access-zz6gf\") pod \"calico-kube-controllers-6df6cd5b4c-5j8sl\" (UID: \"320e67c6-de90-4268-be41-844d88ae2859\") " pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" Dec 16 13:06:20.241813 systemd[1]: Created slice kubepods-besteffort-pod320e67c6_de90_4268_be41_844d88ae2859.slice - libcontainer container kubepods-besteffort-pod320e67c6_de90_4268_be41_844d88ae2859.slice. Dec 16 13:06:20.304445 systemd[1]: Created slice kubepods-besteffort-podf6e8a896_34e6_4faa_8d21_a2f273b7f6d7.slice - libcontainer container kubepods-besteffort-podf6e8a896_34e6_4faa_8d21_a2f273b7f6d7.slice. 
Dec 16 13:06:20.336213 kubelet[3171]: I1216 13:06:20.336182 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e-config-volume\") pod \"coredns-674b8bbfcf-sj4bl\" (UID: \"1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e\") " pod="kube-system/coredns-674b8bbfcf-sj4bl"
Dec 16 13:06:20.336372 kubelet[3171]: I1216 13:06:20.336246 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6e8a896-34e6-4faa-8d21-a2f273b7f6d7-calico-apiserver-certs\") pod \"calico-apiserver-8684c6f77-vmt4x\" (UID: \"f6e8a896-34e6-4faa-8d21-a2f273b7f6d7\") " pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x"
Dec 16 13:06:20.336372 kubelet[3171]: I1216 13:06:20.336264 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvgd\" (UniqueName: \"kubernetes.io/projected/f6e8a896-34e6-4faa-8d21-a2f273b7f6d7-kube-api-access-bcvgd\") pod \"calico-apiserver-8684c6f77-vmt4x\" (UID: \"f6e8a896-34e6-4faa-8d21-a2f273b7f6d7\") " pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x"
Dec 16 13:06:20.336372 kubelet[3171]: I1216 13:06:20.336283 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e-kube-api-access-bxpcx\") pod \"coredns-674b8bbfcf-sj4bl\" (UID: \"1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e\") " pod="kube-system/coredns-674b8bbfcf-sj4bl"
Dec 16 13:06:20.586672 kubelet[3171]: E1216 13:06:20.437127 3171 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Dec 16 13:06:20.586672 kubelet[3171]: E1216 13:06:20.437228 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e-config-volume podName:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e nodeName:}" failed. No retries permitted until 2025-12-16 13:06:20.937209346 +0000 UTC m=+33.795426981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e-config-volume") pod "coredns-674b8bbfcf-sj4bl" (UID: "1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e") : object "kube-system"/"coredns" not registered
Dec 16 13:06:20.591679 containerd[1705]: time="2025-12-16T13:06:20.591331809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df6cd5b4c-5j8sl,Uid:320e67c6-de90-4268-be41-844d88ae2859,Namespace:calico-system,Attempt:0,}"
Dec 16 13:06:20.607641 containerd[1705]: time="2025-12-16T13:06:20.607613673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-vmt4x,Uid:f6e8a896-34e6-4faa-8d21-a2f273b7f6d7,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 13:06:20.633002 systemd[1]: Created slice kubepods-burstable-pod1cdba7b0_c9eb_4700_947f_b2cdf71c9b5e.slice - libcontainer container kubepods-burstable-pod1cdba7b0_c9eb_4700_947f_b2cdf71c9b5e.slice.
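The nestedpendingoperations entry above encodes the kubelet's mount retry pacing: a failed MountVolume.SetUp is blocked until the last failure time plus durationBeforeRetry, and the delay grows on consecutive failures. The initial 500ms comes straight from the log; the doubling factor and the cap used below are assumed defaults, shown only to illustrate the schedule:

```go
// backoff_sketch.go — illustrating the "No retries permitted until ...
// (durationBeforeRetry 500ms)" gating from the log. 500ms is from the log;
// the 2x growth and the ~2m cap are assumptions about kubelet defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	delay := 500 * time.Millisecond                // from the log entry
	// The failure in the log happened at 13:06:20.437209346 UTC
	// (the "until" timestamp minus the 500ms delay).
	retryAt := time.Date(2025, 12, 16, 13, 6, 20, 437209346, time.UTC)
	for attempt := 1; attempt <= 5; attempt++ {
		retryAt = retryAt.Add(delay)
		fmt.Printf("attempt %d gated until %s (delay %v)\n",
			attempt, retryAt.Format("15:04:05.000"), delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The first gated time printed, 13:06:20.937, matches the "No retries permitted until 2025-12-16 13:06:20.937209346" in the entry above.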
Dec 16 13:06:20.738949 kubelet[3171]: I1216 13:06:20.738866 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2db3c056-ec93-4d31-a2ac-714fd98c713a-calico-apiserver-certs\") pod \"calico-apiserver-8684c6f77-qprp2\" (UID: \"2db3c056-ec93-4d31-a2ac-714fd98c713a\") " pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2"
Dec 16 13:06:20.738949 kubelet[3171]: I1216 13:06:20.738906 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds9r\" (UniqueName: \"kubernetes.io/projected/2db3c056-ec93-4d31-a2ac-714fd98c713a-kube-api-access-cds9r\") pod \"calico-apiserver-8684c6f77-qprp2\" (UID: \"2db3c056-ec93-4d31-a2ac-714fd98c713a\") " pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2"
Dec 16 13:06:20.807513 systemd[1]: Created slice kubepods-besteffort-podf3c338af_232e_46f3_9597_30d05ba9e1ec.slice - libcontainer container kubepods-besteffort-podf3c338af_232e_46f3_9597_30d05ba9e1ec.slice.
Dec 16 13:06:20.814597 containerd[1705]: time="2025-12-16T13:06:20.814515664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jf8nl,Uid:f3c338af-232e-46f3-9597-30d05ba9e1ec,Namespace:calico-system,Attempt:0,}"
Dec 16 13:06:20.817195 systemd[1]: Created slice kubepods-besteffort-pod2db3c056_ec93_4d31_a2ac_714fd98c713a.slice - libcontainer container kubepods-besteffort-pod2db3c056_ec93_4d31_a2ac_714fd98c713a.slice.
Dec 16 13:06:20.831345 systemd[1]: Created slice kubepods-besteffort-pod88a83a1e_eebc_46e8_9426_3fba3e5c071e.slice - libcontainer container kubepods-besteffort-pod88a83a1e_eebc_46e8_9426_3fba3e5c071e.slice.
Dec 16 13:06:20.840972 kubelet[3171]: I1216 13:06:20.839418 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84dh\" (UniqueName: \"kubernetes.io/projected/af747593-5d32-4fae-9721-9dc450415076-kube-api-access-v84dh\") pod \"whisker-66d757d6cb-pqrrh\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") " pod="calico-system/whisker-66d757d6cb-pqrrh"
Dec 16 13:06:20.840972 kubelet[3171]: I1216 13:06:20.839458 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88a83a1e-eebc-46e8-9426-3fba3e5c071e-goldmane-ca-bundle\") pod \"goldmane-666569f655-rqstk\" (UID: \"88a83a1e-eebc-46e8-9426-3fba3e5c071e\") " pod="calico-system/goldmane-666569f655-rqstk"
Dec 16 13:06:20.840972 kubelet[3171]: I1216 13:06:20.839527 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af747593-5d32-4fae-9721-9dc450415076-whisker-ca-bundle\") pod \"whisker-66d757d6cb-pqrrh\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") " pod="calico-system/whisker-66d757d6cb-pqrrh"
Dec 16 13:06:20.840972 kubelet[3171]: I1216 13:06:20.839547 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fph8x\" (UniqueName: \"kubernetes.io/projected/1bc506a6-247b-4b74-9be4-71a7824366f8-kube-api-access-fph8x\") pod \"coredns-674b8bbfcf-l9vzc\" (UID: \"1bc506a6-247b-4b74-9be4-71a7824366f8\") " pod="kube-system/coredns-674b8bbfcf-l9vzc"
Dec 16 13:06:20.840972 kubelet[3171]: I1216 13:06:20.839582 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/88a83a1e-eebc-46e8-9426-3fba3e5c071e-goldmane-key-pair\") pod \"goldmane-666569f655-rqstk\" (UID: \"88a83a1e-eebc-46e8-9426-3fba3e5c071e\") " pod="calico-system/goldmane-666569f655-rqstk"
Dec 16 13:06:20.840117 systemd[1]: Created slice kubepods-besteffort-podaf747593_5d32_4fae_9721_9dc450415076.slice - libcontainer container kubepods-besteffort-podaf747593_5d32_4fae_9721_9dc450415076.slice.
Dec 16 13:06:20.841381 kubelet[3171]: I1216 13:06:20.839609 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfz5\" (UniqueName: \"kubernetes.io/projected/88a83a1e-eebc-46e8-9426-3fba3e5c071e-kube-api-access-qpfz5\") pod \"goldmane-666569f655-rqstk\" (UID: \"88a83a1e-eebc-46e8-9426-3fba3e5c071e\") " pod="calico-system/goldmane-666569f655-rqstk"
Dec 16 13:06:20.841381 kubelet[3171]: I1216 13:06:20.839626 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af747593-5d32-4fae-9721-9dc450415076-whisker-backend-key-pair\") pod \"whisker-66d757d6cb-pqrrh\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") " pod="calico-system/whisker-66d757d6cb-pqrrh"
Dec 16 13:06:20.841381 kubelet[3171]: I1216 13:06:20.839644 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bc506a6-247b-4b74-9be4-71a7824366f8-config-volume\") pod \"coredns-674b8bbfcf-l9vzc\" (UID: \"1bc506a6-247b-4b74-9be4-71a7824366f8\") " pod="kube-system/coredns-674b8bbfcf-l9vzc"
Dec 16 13:06:20.841381 kubelet[3171]: I1216 13:06:20.840108 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88a83a1e-eebc-46e8-9426-3fba3e5c071e-config\") pod \"goldmane-666569f655-rqstk\" (UID: \"88a83a1e-eebc-46e8-9426-3fba3e5c071e\") " pod="calico-system/goldmane-666569f655-rqstk"
Dec 16 13:06:20.867765 systemd[1]: Created slice kubepods-burstable-pod1bc506a6_247b_4b74_9be4_71a7824366f8.slice - libcontainer container kubepods-burstable-pod1bc506a6_247b_4b74_9be4_71a7824366f8.slice.
Dec 16 13:06:20.981509 containerd[1705]: time="2025-12-16T13:06:20.981384415Z" level=error msg="Failed to destroy network for sandbox \"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.984223 containerd[1705]: time="2025-12-16T13:06:20.984180429Z" level=error msg="Failed to destroy network for sandbox \"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.984728 containerd[1705]: time="2025-12-16T13:06:20.984701105Z" level=error msg="Failed to destroy network for sandbox \"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.985449 containerd[1705]: time="2025-12-16T13:06:20.985408782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-vmt4x,Uid:f6e8a896-34e6-4faa-8d21-a2f273b7f6d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.985693 kubelet[3171]: E1216 13:06:20.985653 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.985750 kubelet[3171]: E1216 13:06:20.985731 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x"
Dec 16 13:06:20.985774 kubelet[3171]: E1216 13:06:20.985754 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x"
Dec 16 13:06:20.985874 kubelet[3171]: E1216 13:06:20.985852 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2709333af1431538e6021843bf2d529a057a65a45f61cbf9790467e2f8dcaf4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:06:20.988427 containerd[1705]: time="2025-12-16T13:06:20.988371510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jf8nl,Uid:f3c338af-232e-46f3-9597-30d05ba9e1ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.989003 kubelet[3171]: E1216 13:06:20.988976 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.989159 kubelet[3171]: E1216 13:06:20.989144 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:20.989256 kubelet[3171]: E1216 13:06:20.989234 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jf8nl"
Dec 16 13:06:20.989455 kubelet[3171]: E1216 13:06:20.989414 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"612c482a7868dba5258f73ab2565e35f39de6eca134e0c1da7ea9feba342fd55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:20.991530 containerd[1705]: time="2025-12-16T13:06:20.991473078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df6cd5b4c-5j8sl,Uid:320e67c6-de90-4268-be41-844d88ae2859,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.992008 kubelet[3171]: E1216 13:06:20.991658 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:20.992008 kubelet[3171]: E1216 13:06:20.991724 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl"
Dec 16 13:06:20.992008 kubelet[3171]: E1216 13:06:20.991745 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl"
Dec 16 13:06:20.992144 kubelet[3171]: E1216 13:06:20.991789 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6d522ecc7ff3f7bd23582aef2491a133b50542b1f87ff0a37ebe8b2fae2a8a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:06:21.128306 containerd[1705]: time="2025-12-16T13:06:21.128268427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-qprp2,Uid:2db3c056-ec93-4d31-a2ac-714fd98c713a,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 13:06:21.138525 containerd[1705]: time="2025-12-16T13:06:21.138151873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rqstk,Uid:88a83a1e-eebc-46e8-9426-3fba3e5c071e,Namespace:calico-system,Attempt:0,}"
Dec 16 13:06:21.153264 containerd[1705]: time="2025-12-16T13:06:21.153235068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d757d6cb-pqrrh,Uid:af747593-5d32-4fae-9721-9dc450415076,Namespace:calico-system,Attempt:0,}"
Dec 16 13:06:21.172869 containerd[1705]: time="2025-12-16T13:06:21.172837745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9vzc,Uid:1bc506a6-247b-4b74-9be4-71a7824366f8,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:21.229328 containerd[1705]: time="2025-12-16T13:06:21.229222837Z" level=error msg="Failed to destroy network for sandbox \"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:21.232418 containerd[1705]: time="2025-12-16T13:06:21.232377060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-qprp2,Uid:2db3c056-ec93-4d31-a2ac-714fd98c713a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:21.234071 kubelet[3171]: E1216 13:06:21.233066 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:06:21.234071 kubelet[3171]: E1216 13:06:21.233129 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2"
Dec 16 13:06:21.234071 kubelet[3171]: E1216 13:06:21.233153 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2"
Dec 16 13:06:21.234222 kubelet[3171]: E1216 13:06:21.233208 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c83f472eb0a7333b88f9b243f35a60482be80b784279c1a4b8eb6832eebd426\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:06:21.236925 containerd[1705]: time="2025-12-16T13:06:21.236900294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sj4bl,Uid:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e,Namespace:kube-system,Attempt:0,}"
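Every sandbox failure in this stretch bottoms out in the same stat: the Calico CNI plugin reads /var/lib/calico/nodename, a file that calico/node writes once it is up, and fails fast on both add and delete until it exists. A sketch of that gate in Go (the real plugin logic is more involved; this only mirrors the check the error text describes):

```go
// nodename_gate.go — sketch of the check behind "stat /var/lib/calico/
// nodename: no such file or directory" in the sandbox errors above. The
// file is written by calico/node on startup; until then, CNI add/delete
// cannot determine which node's workload endpoints to manage.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("setting up networking for node", name)
}
```

This is why all the failures clear at once later in the log: starting the calico-node container creates the file, and every pending sandbox retry then proceeds.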
time="2025-12-16T13:06:21.236900294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sj4bl,Uid:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:21.242862 containerd[1705]: time="2025-12-16T13:06:21.242820896Z" level=error msg="Failed to destroy network for sandbox \"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.248686 containerd[1705]: time="2025-12-16T13:06:21.248615797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rqstk,Uid:88a83a1e-eebc-46e8-9426-3fba3e5c071e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.249406 containerd[1705]: time="2025-12-16T13:06:21.249375582Z" level=error msg="Failed to destroy network for sandbox \"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.250562 kubelet[3171]: E1216 13:06:21.250532 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.250637 kubelet[3171]: E1216 13:06:21.250601 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rqstk" Dec 16 13:06:21.250667 kubelet[3171]: E1216 13:06:21.250628 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rqstk" Dec 16 13:06:21.250747 kubelet[3171]: E1216 13:06:21.250718 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3da04220f2e845d01e669d57bc77aa712e0985b8db15a17104bff8d8528f0eaa\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e" Dec 16 13:06:21.253957 containerd[1705]: time="2025-12-16T13:06:21.253922785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d757d6cb-pqrrh,Uid:af747593-5d32-4fae-9721-9dc450415076,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.254635 kubelet[3171]: E1216 13:06:21.254605 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.255212 kubelet[3171]: E1216 13:06:21.254733 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d757d6cb-pqrrh" Dec 16 13:06:21.255212 kubelet[3171]: E1216 13:06:21.254759 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d757d6cb-pqrrh" Dec 16 13:06:21.255212 kubelet[3171]: E1216 13:06:21.254813 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66d757d6cb-pqrrh_calico-system(af747593-5d32-4fae-9721-9dc450415076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66d757d6cb-pqrrh_calico-system(af747593-5d32-4fae-9721-9dc450415076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d4d0f4279e33030a5153c214eceb8e209d3ae924360749dfb84be30b4e9cb67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66d757d6cb-pqrrh" podUID="af747593-5d32-4fae-9721-9dc450415076" Dec 16 13:06:21.268181 containerd[1705]: time="2025-12-16T13:06:21.268149110Z" level=error msg="Failed to destroy network for sandbox \"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.271103 containerd[1705]: time="2025-12-16T13:06:21.271047913Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9vzc,Uid:1bc506a6-247b-4b74-9be4-71a7824366f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.271238 kubelet[3171]: E1216 13:06:21.271209 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.271278 kubelet[3171]: E1216 13:06:21.271249 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9vzc" Dec 16 13:06:21.271278 kubelet[3171]: E1216 13:06:21.271268 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l9vzc" Dec 16 13:06:21.271340 kubelet[3171]: E1216 13:06:21.271308 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l9vzc_kube-system(1bc506a6-247b-4b74-9be4-71a7824366f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l9vzc_kube-system(1bc506a6-247b-4b74-9be4-71a7824366f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9982f712c10742497109a71c81bafec04face2feda536c6d6d3768b74de3c57d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l9vzc" podUID="1bc506a6-247b-4b74-9be4-71a7824366f8" Dec 16 13:06:21.294535 containerd[1705]: time="2025-12-16T13:06:21.294506308Z" level=error msg="Failed to destroy network for sandbox \"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.297232 containerd[1705]: time="2025-12-16T13:06:21.297165877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sj4bl,Uid:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.297373 kubelet[3171]: E1216 13:06:21.297329 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:06:21.297416 kubelet[3171]: E1216 13:06:21.297388 3171 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sj4bl" Dec 16 13:06:21.297443 kubelet[3171]: E1216 13:06:21.297408 3171 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sj4bl" Dec 16 13:06:21.297521 kubelet[3171]: E1216 13:06:21.297460 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sj4bl_kube-system(1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sj4bl_kube-system(1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c53f0f53398a18355bea48ee8f0b2b6e6d24ab5f81560076bca357aed7459bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sj4bl" podUID="1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e" Dec 16 13:06:21.346274 containerd[1705]: time="2025-12-16T13:06:21.346173958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:06:21.852976 systemd[1]: run-netns-cni\x2d3b233cc2\x2d7339\x2d1f50\x2dab0f\x2d0b2c2caca841.mount: Deactivated successfully. Dec 16 13:06:21.853069 systemd[1]: run-netns-cni\x2d98d25bd8\x2d175d\x2d9833\x2d2d27\x2dea6ea55b5887.mount: Deactivated successfully. Dec 16 13:06:21.853125 systemd[1]: run-netns-cni\x2d79386950\x2dba87\x2d2aeb\x2d41c3\x2da06ec9ad7191.mount: Deactivated successfully. Dec 16 13:06:26.710952 kubelet[3171]: I1216 13:06:26.710912 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:06:28.403433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126671027.mount: Deactivated successfully. 
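The \x2d runs in the mount unit names above are systemd's unit-name escaping: a filesystem path becomes a unit name by joining path components with "-", so a literal hyphen inside a component has to be hex-escaped as \x2d. A simplified sketch of that escaping for a single component (the real rules also cover leading dots and a few other cases):

```go
// unit_escape_sketch.go — why /run/netns/cni-3b233cc2-… shows up in the log
// as run-netns-cni\x2d3b233cc2\x2d….mount. Simplified: systemd keeps
// alphanumerics, "_" and non-leading "."; everything else becomes \xNN.
package main

import "fmt"

func escapeComponent(s string) string {
	out := make([]byte, 0, len(s))
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.' && i > 0:
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	// Path components are joined with "-", so the unit name is
	// "run-netns-" plus the escaped basename.
	fmt.Println("run-netns-" + escapeComponent("cni-3b233cc2-7339-1f50-ab0f-0b2c2caca841") + ".mount")
}
```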
Dec 16 13:06:28.440938 containerd[1705]: time="2025-12-16T13:06:28.440893615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:28.443371 containerd[1705]: time="2025-12-16T13:06:28.443271049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Dec 16 13:06:28.445832 containerd[1705]: time="2025-12-16T13:06:28.445803747Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:28.449378 containerd[1705]: time="2025-12-16T13:06:28.449333846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:06:28.450061 containerd[1705]: time="2025-12-16T13:06:28.449714225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.103251741s"
Dec 16 13:06:28.450061 containerd[1705]: time="2025-12-16T13:06:28.449744622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Dec 16 13:06:28.471235 containerd[1705]: time="2025-12-16T13:06:28.471206568Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 16 13:06:28.496572 containerd[1705]: time="2025-12-16T13:06:28.495591482Z" level=info msg="Container f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:28.536113 containerd[1705]: time="2025-12-16T13:06:28.536082161Z" level=info msg="CreateContainer within sandbox \"55cf2aaacab75b138520dcf6cfa8b23ba12e28845bf1d58862b616effbb1082e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31\""
Dec 16 13:06:28.536491 containerd[1705]: time="2025-12-16T13:06:28.536462931Z" level=info msg="StartContainer for \"f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31\""
Dec 16 13:06:28.538781 containerd[1705]: time="2025-12-16T13:06:28.538744208Z" level=info msg="connecting to shim f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31" address="unix:///run/containerd/s/af31f412639d8e9f6ec9f8f805c9dd5334e4418c7be66705ff6194339b53d68d" protocol=ttrpc version=3
Dec 16 13:06:28.554674 systemd[1]: Started cri-containerd-f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31.scope - libcontainer container f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31.
Dec 16 13:06:28.642792 containerd[1705]: time="2025-12-16T13:06:28.642757078Z" level=info msg="StartContainer for \"f4f8229fba98094d7cb3ba50850d43fef848c94521d83e9fe11d4cb48f4aee31\" returns successfully"
Dec 16 13:06:29.026789 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 16 13:06:29.026910 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Dec 16 13:06:29.191360 kubelet[3171]: I1216 13:06:29.190793 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af747593-5d32-4fae-9721-9dc450415076-whisker-ca-bundle\") pod \"af747593-5d32-4fae-9721-9dc450415076\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") "
Dec 16 13:06:29.191360 kubelet[3171]: I1216 13:06:29.190836 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84dh\" (UniqueName: \"kubernetes.io/projected/af747593-5d32-4fae-9721-9dc450415076-kube-api-access-v84dh\") pod \"af747593-5d32-4fae-9721-9dc450415076\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") "
Dec 16 13:06:29.191360 kubelet[3171]: I1216 13:06:29.190865 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af747593-5d32-4fae-9721-9dc450415076-whisker-backend-key-pair\") pod \"af747593-5d32-4fae-9721-9dc450415076\" (UID: \"af747593-5d32-4fae-9721-9dc450415076\") "
Dec 16 13:06:29.192841 kubelet[3171]: I1216 13:06:29.192725 3171 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af747593-5d32-4fae-9721-9dc450415076-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "af747593-5d32-4fae-9721-9dc450415076" (UID: "af747593-5d32-4fae-9721-9dc450415076"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:06:29.194675 kubelet[3171]: I1216 13:06:29.194639 3171 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af747593-5d32-4fae-9721-9dc450415076-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "af747593-5d32-4fae-9721-9dc450415076" (UID: "af747593-5d32-4fae-9721-9dc450415076"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:06:29.197985 kubelet[3171]: I1216 13:06:29.197957 3171 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af747593-5d32-4fae-9721-9dc450415076-kube-api-access-v84dh" (OuterVolumeSpecName: "kube-api-access-v84dh") pod "af747593-5d32-4fae-9721-9dc450415076" (UID: "af747593-5d32-4fae-9721-9dc450415076"). InnerVolumeSpecName "kube-api-access-v84dh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:06:29.257872 systemd[1]: Removed slice kubepods-besteffort-podaf747593_5d32_4fae_9721_9dc450415076.slice - libcontainer container kubepods-besteffort-podaf747593_5d32_4fae_9721_9dc450415076.slice.
Dec 16 13:06:29.291609 kubelet[3171]: I1216 13:06:29.291513 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af747593-5d32-4fae-9721-9dc450415076-whisker-ca-bundle\") on node \"ci-4459.2.2-a-efe6a0b1f4\" DevicePath \"\""
Dec 16 13:06:29.291742 kubelet[3171]: I1216 13:06:29.291540 3171 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v84dh\" (UniqueName: \"kubernetes.io/projected/af747593-5d32-4fae-9721-9dc450415076-kube-api-access-v84dh\") on node \"ci-4459.2.2-a-efe6a0b1f4\" DevicePath \"\""
Dec 16 13:06:29.291742 kubelet[3171]: I1216 13:06:29.291722 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af747593-5d32-4fae-9721-9dc450415076-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-efe6a0b1f4\" DevicePath \"\""
Dec 16 13:06:29.396498 kubelet[3171]: I1216 13:06:29.396336 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dq44b" podStartSLOduration=1.6665972820000001 podStartE2EDuration="23.396320128s" podCreationTimestamp="2025-12-16 13:06:06 +0000 UTC" firstStartedPulling="2025-12-16 13:06:06.72069076 +0000 UTC m=+19.578908406" lastFinishedPulling="2025-12-16 13:06:28.450413606 +0000 UTC m=+41.308631252" observedRunningTime="2025-12-16 13:06:29.384856363 +0000 UTC m=+42.243074026" watchObservedRunningTime="2025-12-16 13:06:29.396320128 +0000 UTC m=+42.254537777"
Dec 16 13:06:29.403782 systemd[1]: var-lib-kubelet-pods-af747593\x2d5d32\x2d4fae\x2d9721\x2d9dc450415076-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv84dh.mount: Deactivated successfully.
Dec 16 13:06:29.403874 systemd[1]: var-lib-kubelet-pods-af747593\x2d5d32\x2d4fae\x2d9721\x2d9dc450415076-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Dec 16 13:06:29.458326 systemd[1]: Created slice kubepods-besteffort-pod8813b7f5_162a_48c4_adf1_e3bb0aa1a8c9.slice - libcontainer container kubepods-besteffort-pod8813b7f5_162a_48c4_adf1_e3bb0aa1a8c9.slice.
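The pod_startup_latency_tracker line is a small calculation over the timestamps it prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. Reproducing the arithmetic with the log's own values:

```go
// slo_sketch.go — reproducing the "Observed pod startup duration" math from
// the log entry above, using the timestamps it prints.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-16 13:06:06 +0000 UTC")
	pullStart := mustParse("2025-12-16 13:06:06.72069076 +0000 UTC")
	pullEnd := mustParse("2025-12-16 13:06:28.450413606 +0000 UTC")
	observed := mustParse("2025-12-16 13:06:29.396320128 +0000 UTC")

	e2e := observed.Sub(created)        // 23.396320128s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // 23.396320128s - 21.729722846s = 1.666597282s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

The 1.666597282s result matches the logged podStartSLOduration (its trailing ...0000001 is float64 printing noise), confirming that nearly all of the 23s startup was image pulling.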
Dec 16 13:06:29.493990 kubelet[3171]: I1216 13:06:29.493928 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9-whisker-ca-bundle\") pod \"whisker-59878fbb86-xz6hr\" (UID: \"8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9\") " pod="calico-system/whisker-59878fbb86-xz6hr" Dec 16 13:06:29.494120 kubelet[3171]: I1216 13:06:29.493997 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9-whisker-backend-key-pair\") pod \"whisker-59878fbb86-xz6hr\" (UID: \"8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9\") " pod="calico-system/whisker-59878fbb86-xz6hr" Dec 16 13:06:29.494120 kubelet[3171]: I1216 13:06:29.494033 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvzp\" (UniqueName: \"kubernetes.io/projected/8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9-kube-api-access-wjvzp\") pod \"whisker-59878fbb86-xz6hr\" (UID: \"8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9\") " pod="calico-system/whisker-59878fbb86-xz6hr" Dec 16 13:06:29.761903 containerd[1705]: time="2025-12-16T13:06:29.761849462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59878fbb86-xz6hr,Uid:8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:29.853414 systemd-networkd[1337]: calib25d1e4b3e7: Link UP Dec 16 13:06:29.855095 systemd-networkd[1337]: calib25d1e4b3e7: Gained carrier Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.787 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.797 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0 whisker-59878fbb86- calico-system 8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9 927 0 2025-12-16 13:06:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59878fbb86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 whisker-59878fbb86-xz6hr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib25d1e4b3e7 [] [] }} ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.797 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.817 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" HandleID="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.817 [INFO][4266] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" HandleID="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"whisker-59878fbb86-xz6hr", "timestamp":"2025-12-16 13:06:29.81709813 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.817 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.817 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.817 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.822 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.825 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.829 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.831 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.832 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.832 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.833 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76 Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.837 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.844 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 handle="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.844 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:29.868789 containerd[1705]: 2025-12-16 13:06:29.844 
[INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.844 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" HandleID="k8s-pod-network.d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.847 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0", GenerateName:"whisker-59878fbb86-", Namespace:"calico-system", SelfLink:"", UID:"8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59878fbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"whisker-59878fbb86-xz6hr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib25d1e4b3e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.847 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.129/32] ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.847 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib25d1e4b3e7 ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.852 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.869370 containerd[1705]: 2025-12-16 13:06:29.852 [INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0", GenerateName:"whisker-59878fbb86-", Namespace:"calico-system", SelfLink:"", UID:"8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59878fbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76", Pod:"whisker-59878fbb86-xz6hr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib25d1e4b3e7", MAC:"fe:f9:3e:b1:77:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:29.869669 containerd[1705]: 2025-12-16 13:06:29.867 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" Namespace="calico-system" Pod="whisker-59878fbb86-xz6hr" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-whisker--59878fbb86--xz6hr-eth0" Dec 16 13:06:29.913057 containerd[1705]: time="2025-12-16T13:06:29.912933293Z" level=info msg="connecting to shim d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76" address="unix:///run/containerd/s/db43fc83d410469bd2dbf78fdaf91e83cee2d940f1186618121a572d1d3ee5fe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:29.934639 systemd[1]: Started cri-containerd-d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76.scope - libcontainer container d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76. 
Dec 16 13:06:29.972959 containerd[1705]: time="2025-12-16T13:06:29.972925099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59878fbb86-xz6hr,Uid:8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d01f011b84d68126cbb3035af8f291d1be8cc77c288c430a7cc6439c4543bf76\"" Dec 16 13:06:29.974114 containerd[1705]: time="2025-12-16T13:06:29.974082972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:30.309510 containerd[1705]: time="2025-12-16T13:06:30.307753026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:30.310610 containerd[1705]: time="2025-12-16T13:06:30.310556018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:30.310702 containerd[1705]: time="2025-12-16T13:06:30.310675323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:06:30.310885 kubelet[3171]: E1216 13:06:30.310850 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:30.311551 kubelet[3171]: E1216 13:06:30.311513 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:30.311755 kubelet[3171]: E1216 13:06:30.311718 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7be21a1e0aeb4a608313b502cd783836,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:30.314708 containerd[1705]: time="2025-12-16T13:06:30.314679659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:30.712000 containerd[1705]: time="2025-12-16T13:06:30.711863311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:30.715046 containerd[1705]: time="2025-12-16T13:06:30.714939646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:30.715046 containerd[1705]: time="2025-12-16T13:06:30.715024841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:06:30.715439 kubelet[3171]: E1216 13:06:30.715356 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:30.715439 kubelet[3171]: E1216 13:06:30.715422 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:30.715749 kubelet[3171]: E1216 13:06:30.715711 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:30.718300 kubelet[3171]: E1216 13:06:30.718230 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9" Dec 16 13:06:30.890891 systemd-networkd[1337]: vxlan.calico: Link UP Dec 16 13:06:30.890899 systemd-networkd[1337]: vxlan.calico: Gained carrier Dec 16 13:06:31.254477 kubelet[3171]: I1216 13:06:31.254434 3171 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="af747593-5d32-4fae-9721-9dc450415076" path="/var/lib/kubelet/pods/af747593-5d32-4fae-9721-9dc450415076/volumes" Dec 16 13:06:31.372227 kubelet[3171]: E1216 13:06:31.372181 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9" Dec 16 13:06:31.626639 systemd-networkd[1337]: calib25d1e4b3e7: Gained IPv6LL Dec 16 13:06:32.202627 systemd-networkd[1337]: vxlan.calico: Gained IPv6LL Dec 16 13:06:32.251706 containerd[1705]: time="2025-12-16T13:06:32.251658165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-qprp2,Uid:2db3c056-ec93-4d31-a2ac-714fd98c713a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:06:32.252051 containerd[1705]: time="2025-12-16T13:06:32.251655568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9vzc,Uid:1bc506a6-247b-4b74-9be4-71a7824366f8,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:32.395217 systemd-networkd[1337]: cali91a93e18196: Link UP Dec 16 13:06:32.398256 systemd-networkd[1337]: cali91a93e18196: Gained carrier Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.317 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0 coredns-674b8bbfcf- kube-system 1bc506a6-247b-4b74-9be4-71a7824366f8 857 0 2025-12-16 13:05:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 coredns-674b8bbfcf-l9vzc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali91a93e18196 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.317 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.346 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" 
HandleID="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.346 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" HandleID="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5150), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"coredns-674b8bbfcf-l9vzc", "timestamp":"2025-12-16 13:06:32.346740572 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.347 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.347 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.347 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.354 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.357 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.362 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.365 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.367 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.367 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.369 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.373 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.382 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 handle="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.382 
[INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.382 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:06:32.413302 containerd[1705]: 2025-12-16 13:06:32.383 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" HandleID="k8s-pod-network.f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.415283 containerd[1705]: 2025-12-16 13:06:32.389 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1bc506a6-247b-4b74-9be4-71a7824366f8", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"coredns-674b8bbfcf-l9vzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91a93e18196", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:32.415283 containerd[1705]: 2025-12-16 13:06:32.390 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.130/32] ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.415283 containerd[1705]: 2025-12-16 13:06:32.390 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91a93e18196 ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" 
WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.415283 containerd[1705]: 2025-12-16 13:06:32.397 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.415468 containerd[1705]: 2025-12-16 13:06:32.397 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1bc506a6-247b-4b74-9be4-71a7824366f8", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe", Pod:"coredns-674b8bbfcf-l9vzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91a93e18196", MAC:"26:dc:d3:50:00:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:32.415468 containerd[1705]: 2025-12-16 13:06:32.411 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-l9vzc" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--l9vzc-eth0" Dec 16 13:06:32.462740 containerd[1705]: time="2025-12-16T13:06:32.461638184Z" level=info msg="connecting to shim f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe" address="unix:///run/containerd/s/fddd98885fe2a17e1e043c4e9c97f5a7bc19fb431e62f32bf4f65ef30874591f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:32.490620 systemd[1]: Started cri-containerd-f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe.scope - libcontainer container 
f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe. Dec 16 13:06:32.521610 systemd-networkd[1337]: caliaf8bd0018e0: Link UP Dec 16 13:06:32.522711 systemd-networkd[1337]: caliaf8bd0018e0: Gained carrier Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.319 [INFO][4518] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0 calico-apiserver-8684c6f77- calico-apiserver 2db3c056-ec93-4d31-a2ac-714fd98c713a 854 0 2025-12-16 13:06:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8684c6f77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 calico-apiserver-8684c6f77-qprp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf8bd0018e0 [] [] }} ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.319 [INFO][4518] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.351 [INFO][4546] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" HandleID="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.351 [INFO][4546] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" HandleID="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"calico-apiserver-8684c6f77-qprp2", "timestamp":"2025-12-16 13:06:32.351635331 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.352 [INFO][4546] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.383 [INFO][4546] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
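[Editor's note] The long &v3.WorkloadEndpoint{...} dumps are containerd relaying the CNI plugin's Go-syntax struct formatting, so numeric fields come out as hex literals: in the coredns endpoint above, Port:0x35 is 53 (the dns and dns-tcp named ports) and Port:0x23c1 is 9153 (the coredns metrics port). A trivial decode, for the record:

    package main

    import "fmt"

    func main() {
        // Ports from the WorkloadEndpointPort dump, printed there in hex.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }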
Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.383 [INFO][4546] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.457 [INFO][4546] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.468 [INFO][4546] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.477 [INFO][4546] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.481 [INFO][4546] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.493 [INFO][4546] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.493 [INFO][4546] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.495 [INFO][4546] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.501 [INFO][4546] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.511 [INFO][4546] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.512 [INFO][4546] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:32.550447 containerd[1705]: 2025-12-16 13:06:32.512 [INFO][4546] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
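[Editor's note] Each assignment brackets its work with "About to acquire host-wide IPAM lock" / "Acquired" / "Released": concurrent CNI ADDs on one node (here the coredns and calico-apiserver sandboxes racing at 13:06:32 — note [4546] waits from 13:06:32.352 to 13:06:32.383 while [4544] holds the lock) are serialized so two pods can never claim the same address from the shared block. The toy model below mimics that discipline with a mutex; it is illustrative only, not Calico's actual implementation.

    package main

    import (
        "fmt"
        "sync"
    )

    // allocator hands out successive addresses from a /26 block, serialized
    // by a lock the way Calico serializes per-host IPAM requests.
    type allocator struct {
        mu   sync.Mutex
        next int // offset into the block; .128 is the block address
    }

    func (a *allocator) assign() string {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."
        a.next++
        return fmt.Sprintf("192.168.52.%d/26", 128+a.next)
    }

    func main() {
        a := &allocator{}
        var wg sync.WaitGroup
        for _, pod := range []string{"whisker", "coredns", "apiserver", "kube-controllers"} {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                fmt.Println(p, "=>", a.assign())
            }(pod)
        }
        wg.Wait()
    }

Which pod gets which address depends on arrival order, but every address is handed out exactly once — matching the log, where .129 through .132 each go to a single endpoint.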
Dec 16 13:06:32.552280 containerd[1705]: 2025-12-16 13:06:32.512 [INFO][4546] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" HandleID="k8s-pod-network.efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.552280 containerd[1705]: 2025-12-16 13:06:32.516 [INFO][4518] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0", GenerateName:"calico-apiserver-8684c6f77-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db3c056-ec93-4d31-a2ac-714fd98c713a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8684c6f77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"calico-apiserver-8684c6f77-qprp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf8bd0018e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:32.552280 containerd[1705]: 2025-12-16 13:06:32.517 [INFO][4518] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.131/32] ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.552280 containerd[1705]: 2025-12-16 13:06:32.517 [INFO][4518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf8bd0018e0 ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.552280 containerd[1705]: 2025-12-16 13:06:32.523 [INFO][4518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.552460 containerd[1705]: 2025-12-16 13:06:32.523 [INFO][4518] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0", GenerateName:"calico-apiserver-8684c6f77-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db3c056-ec93-4d31-a2ac-714fd98c713a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8684c6f77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d", Pod:"calico-apiserver-8684c6f77-qprp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf8bd0018e0", MAC:"72:73:18:05:b0:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:32.552460 containerd[1705]: 2025-12-16 13:06:32.540 [INFO][4518] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-qprp2" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--qprp2-eth0" Dec 16 13:06:32.557889 containerd[1705]: time="2025-12-16T13:06:32.557844185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l9vzc,Uid:1bc506a6-247b-4b74-9be4-71a7824366f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe\"" Dec 16 13:06:32.568106 containerd[1705]: time="2025-12-16T13:06:32.567938944Z" level=info msg="CreateContainer within sandbox \"f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:32.606811 containerd[1705]: time="2025-12-16T13:06:32.606789436Z" level=info msg="Container 5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:32.608348 containerd[1705]: time="2025-12-16T13:06:32.608324931Z" level=info msg="connecting to shim efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d" address="unix:///run/containerd/s/6587ea8f6c9800a132ecba01723e27ac95379fe06807051703745b6cdc482c04" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:32.623618 containerd[1705]: time="2025-12-16T13:06:32.623588118Z" level=info msg="CreateContainer within sandbox 
\"f84fbabffd8129b351cb3e3ac64985fee46c655def18fdebe7e48656d70891fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea\"" Dec 16 13:06:32.623746 systemd[1]: Started cri-containerd-efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d.scope - libcontainer container efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d. Dec 16 13:06:32.624529 containerd[1705]: time="2025-12-16T13:06:32.624261297Z" level=info msg="StartContainer for \"5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea\"" Dec 16 13:06:32.625788 containerd[1705]: time="2025-12-16T13:06:32.625734313Z" level=info msg="connecting to shim 5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea" address="unix:///run/containerd/s/fddd98885fe2a17e1e043c4e9c97f5a7bc19fb431e62f32bf4f65ef30874591f" protocol=ttrpc version=3 Dec 16 13:06:32.649772 systemd[1]: Started cri-containerd-5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea.scope - libcontainer container 5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea. Dec 16 13:06:32.694622 containerd[1705]: time="2025-12-16T13:06:32.694578967Z" level=info msg="StartContainer for \"5a479f2a06fe56e979f91524f5af81ccc9a18a1dc5b2aaa3ba5f33c37e60d9ea\" returns successfully" Dec 16 13:06:32.772067 containerd[1705]: time="2025-12-16T13:06:32.771463629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-qprp2,Uid:2db3c056-ec93-4d31-a2ac-714fd98c713a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"efad6047bf7914ede718c92f790146c096aca4dabef420eb8e926de0ed9f855d\"" Dec 16 13:06:32.774848 containerd[1705]: time="2025-12-16T13:06:32.774793921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:33.137672 containerd[1705]: time="2025-12-16T13:06:33.137621508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:33.140361 containerd[1705]: time="2025-12-16T13:06:33.140312513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:33.140432 containerd[1705]: time="2025-12-16T13:06:33.140324021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:06:33.140661 kubelet[3171]: E1216 13:06:33.140611 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:33.141121 kubelet[3171]: E1216 13:06:33.140677 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:33.141162 kubelet[3171]: E1216 13:06:33.141127 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cds9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:33.142382 kubelet[3171]: E1216 13:06:33.142329 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a" Dec 16 13:06:33.252085 containerd[1705]: time="2025-12-16T13:06:33.251894218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df6cd5b4c-5j8sl,Uid:320e67c6-de90-4268-be41-844d88ae2859,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:33.351766 systemd-networkd[1337]: cali8fbf608e0fd: Link UP Dec 16 13:06:33.351979 systemd-networkd[1337]: cali8fbf608e0fd: Gained carrier Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.293 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0 calico-kube-controllers-6df6cd5b4c- calico-system 320e67c6-de90-4268-be41-844d88ae2859 850 0 2025-12-16 13:06:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6df6cd5b4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 calico-kube-controllers-6df6cd5b4c-5j8sl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8fbf608e0fd [] [] }} ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.293 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.313 [INFO][4712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" HandleID="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.313 [INFO][4712] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" HandleID="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"calico-kube-controllers-6df6cd5b4c-5j8sl", "timestamp":"2025-12-16 13:06:33.313062002 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.313 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.313 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.313 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.319 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.323 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.326 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.327 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.329 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.329 [INFO][4712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.330 [INFO][4712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.336 [INFO][4712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.341 [INFO][4712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.341 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:33.370242 containerd[1705]: 2025-12-16 13:06:33.341 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
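The entries above bracket one complete Calico IPAM transaction: the plugin takes the host-wide lock, confirms this node's affinity to the block 192.168.52.128/26, claims the next free address from that block, writes the block back, and releases the lock. A /26 spans 64 addresses (192.168.52.128 through .191), so every address handed out over the course of this section should fall inside it. A minimal standalone Go sketch (not Calico code) that checks this with the standard library's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block, as reported in the IPAM log entries above.
	block := netip.MustParsePrefix("192.168.52.128/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

	// The addresses Calico assigns over the course of this section.
	for _, s := range []string{"192.168.52.132", "192.168.52.133", "192.168.52.134", "192.168.52.135"} {
		addr := netip.MustParseAddr(s)
		fmt.Println(s, "in block:", block.Contains(addr)) // true for all four
	}
}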
Dec 16 13:06:33.371156 containerd[1705]: 2025-12-16 13:06:33.341 [INFO][4712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" HandleID="k8s-pod-network.7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.371156 containerd[1705]: 2025-12-16 13:06:33.344 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0", GenerateName:"calico-kube-controllers-6df6cd5b4c-", Namespace:"calico-system", SelfLink:"", UID:"320e67c6-de90-4268-be41-844d88ae2859", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df6cd5b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"calico-kube-controllers-6df6cd5b4c-5j8sl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fbf608e0fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:33.371156 containerd[1705]: 2025-12-16 13:06:33.344 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.132/32] ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.371156 containerd[1705]: 2025-12-16 13:06:33.345 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fbf608e0fd ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.371156 containerd[1705]: 2025-12-16 13:06:33.354 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" 
WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.371400 containerd[1705]: 2025-12-16 13:06:33.358 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0", GenerateName:"calico-kube-controllers-6df6cd5b4c-", Namespace:"calico-system", SelfLink:"", UID:"320e67c6-de90-4268-be41-844d88ae2859", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df6cd5b4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df", Pod:"calico-kube-controllers-6df6cd5b4c-5j8sl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fbf608e0fd", MAC:"5a:01:83:f9:0a:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:33.371400 containerd[1705]: 2025-12-16 13:06:33.367 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" Namespace="calico-system" Pod="calico-kube-controllers-6df6cd5b4c-5j8sl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--kube--controllers--6df6cd5b4c--5j8sl-eth0" Dec 16 13:06:33.379770 kubelet[3171]: E1216 13:06:33.379645 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a" Dec 16 13:06:33.410393 kubelet[3171]: I1216 13:06:33.410149 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l9vzc" podStartSLOduration=41.410132195 podStartE2EDuration="41.410132195s" podCreationTimestamp="2025-12-16 13:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:33.409684061 +0000 UTC m=+46.267901709" watchObservedRunningTime="2025-12-16 13:06:33.410132195 +0000 UTC m=+46.268349848" Dec 16 13:06:33.437459 containerd[1705]: time="2025-12-16T13:06:33.437379397Z" level=info msg="connecting to shim 7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df" address="unix:///run/containerd/s/8e111d6084a01e01fd0daac1fefa2e3ff3a484ee2f21c3f0d2094a3e4bfef12b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:33.466644 systemd[1]: Started cri-containerd-7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df.scope - libcontainer container 7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df. Dec 16 13:06:33.506347 containerd[1705]: time="2025-12-16T13:06:33.506317888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df6cd5b4c-5j8sl,Uid:320e67c6-de90-4268-be41-844d88ae2859,Namespace:calico-system,Attempt:0,} returns sandbox id \"7599689d7ced422624c6b54f4fea49517f08c36afe4900a50270d7b06545b1df\"" Dec 16 13:06:33.507937 containerd[1705]: time="2025-12-16T13:06:33.507583539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:33.610661 systemd-networkd[1337]: cali91a93e18196: Gained IPv6LL Dec 16 13:06:33.738616 systemd-networkd[1337]: caliaf8bd0018e0: Gained IPv6LL Dec 16 13:06:33.867633 containerd[1705]: time="2025-12-16T13:06:33.867581011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:33.870423 containerd[1705]: time="2025-12-16T13:06:33.870361858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:33.870717 containerd[1705]: time="2025-12-16T13:06:33.870406974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:06:33.870835 kubelet[3171]: E1216 13:06:33.870754 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:33.870895 kubelet[3171]: E1216 13:06:33.870849 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:33.871502 kubelet[3171]: E1216 13:06:33.871413 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz6gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:33.872765 kubelet[3171]: E1216 13:06:33.872732 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859" Dec 16 13:06:34.251737 containerd[1705]: time="2025-12-16T13:06:34.251695761Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-vmt4x,Uid:f6e8a896-34e6-4faa-8d21-a2f273b7f6d7,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:06:34.351637 systemd-networkd[1337]: calie11a126c727: Link UP Dec 16 13:06:34.352711 systemd-networkd[1337]: calie11a126c727: Gained carrier Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.297 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0 calico-apiserver-8684c6f77- calico-apiserver f6e8a896-34e6-4faa-8d21-a2f273b7f6d7 851 0 2025-12-16 13:06:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8684c6f77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 calico-apiserver-8684c6f77-vmt4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie11a126c727 [] [] }} ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.297 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.319 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" HandleID="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.319 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" HandleID="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"calico-apiserver-8684c6f77-vmt4x", "timestamp":"2025-12-16 13:06:34.319696642 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.320 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.320 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.320 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.325 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.328 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.332 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.333 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.335 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.335 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.336 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.339 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.347 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.133/26] block=192.168.52.128/26 handle="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.347 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:34.366718 containerd[1705]: 2025-12-16 13:06:34.347 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
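Every pull of a ghcr.io/flatcar/calico/* image in this log fails the same way: containerd reports "fetch failed after status: 404 Not Found" against ghcr.io and surfaces it as an rpc NotFound, which kubelet then records as ErrImagePull. One way to confirm the failure comes from the registry rather than from kubelet is to replay the pull directly against containerd in the same "k8s.io" namespace kubelet uses — a minimal sketch, assuming the containerd 1.x Go client and the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet's CRI pulls live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references the kubelet failed on above.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expect the same "not found" resolution error as in the log.
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled", img.Name())
}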
Dec 16 13:06:34.367939 containerd[1705]: 2025-12-16 13:06:34.347 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" HandleID="k8s-pod-network.dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.367939 containerd[1705]: 2025-12-16 13:06:34.349 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0", GenerateName:"calico-apiserver-8684c6f77-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6e8a896-34e6-4faa-8d21-a2f273b7f6d7", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8684c6f77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"calico-apiserver-8684c6f77-vmt4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie11a126c727", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:34.367939 containerd[1705]: 2025-12-16 13:06:34.349 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.133/32] ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.367939 containerd[1705]: 2025-12-16 13:06:34.349 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie11a126c727 ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.367939 containerd[1705]: 2025-12-16 13:06:34.352 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.368166 containerd[1705]: 2025-12-16 13:06:34.352 [INFO][4778] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0", GenerateName:"calico-apiserver-8684c6f77-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6e8a896-34e6-4faa-8d21-a2f273b7f6d7", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8684c6f77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab", Pod:"calico-apiserver-8684c6f77-vmt4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie11a126c727", MAC:"e6:93:66:ca:15:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:34.368166 containerd[1705]: 2025-12-16 13:06:34.364 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" Namespace="calico-apiserver" Pod="calico-apiserver-8684c6f77-vmt4x" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-calico--apiserver--8684c6f77--vmt4x-eth0" Dec 16 13:06:34.390963 kubelet[3171]: E1216 13:06:34.390887 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a" Dec 16 13:06:34.391684 kubelet[3171]: E1216 13:06:34.391650 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859" Dec 16 13:06:34.420833 containerd[1705]: time="2025-12-16T13:06:34.420796266Z" level=info msg="connecting to shim dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab" address="unix:///run/containerd/s/9d5558b336eb62245aaecee46bfff322bd5ef3e291af3623f73ecef865c22fc2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:34.445619 systemd[1]: Started cri-containerd-dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab.scope - libcontainer container dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab. Dec 16 13:06:34.484856 containerd[1705]: time="2025-12-16T13:06:34.484827630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8684c6f77-vmt4x,Uid:f6e8a896-34e6-4faa-8d21-a2f273b7f6d7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dfd37b9250600c08c5e3efc7dce8c19515f9d17b159ef641ccb24e41461a9bab\"" Dec 16 13:06:34.485937 containerd[1705]: time="2025-12-16T13:06:34.485913095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:34.861812 containerd[1705]: time="2025-12-16T13:06:34.861751756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:34.864747 containerd[1705]: time="2025-12-16T13:06:34.864633137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:34.864747 containerd[1705]: time="2025-12-16T13:06:34.864688340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:06:34.864926 kubelet[3171]: E1216 13:06:34.864860 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:34.865015 kubelet[3171]: E1216 13:06:34.864936 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:34.865116 kubelet[3171]: E1216 13:06:34.865084 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcvgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:34.866519 kubelet[3171]: E1216 13:06:34.866440 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7" Dec 16 13:06:34.954624 systemd-networkd[1337]: cali8fbf608e0fd: Gained IPv6LL Dec 16 13:06:35.252511 containerd[1705]: time="2025-12-16T13:06:35.251762116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jf8nl,Uid:f3c338af-232e-46f3-9597-30d05ba9e1ec,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:35.350675 systemd-networkd[1337]: cali0520a190968: Link UP Dec 16 13:06:35.352637 systemd-networkd[1337]: cali0520a190968: Gained carrier Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.294 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0 csi-node-driver- calico-system f3c338af-232e-46f3-9597-30d05ba9e1ec 
739 0 2025-12-16 13:06:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 csi-node-driver-jf8nl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0520a190968 [] [] }} ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.294 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.316 [INFO][4863] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" HandleID="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.316 [INFO][4863] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" HandleID="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"csi-node-driver-jf8nl", "timestamp":"2025-12-16 13:06:35.316646065 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.316 [INFO][4863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.316 [INFO][4863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.316 [INFO][4863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.322 [INFO][4863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.325 [INFO][4863] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.328 [INFO][4863] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.329 [INFO][4863] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.331 [INFO][4863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.331 [INFO][4863] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.332 [INFO][4863] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403 Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.339 [INFO][4863] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.346 [INFO][4863] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.134/26] block=192.168.52.128/26 handle="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.346 [INFO][4863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.134/26] handle="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.346 [INFO][4863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:06:35.373744 containerd[1705]: 2025-12-16 13:06:35.346 [INFO][4863] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.134/26] IPv6=[] ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" HandleID="k8s-pod-network.be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.348 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3c338af-232e-46f3-9597-30d05ba9e1ec", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"csi-node-driver-jf8nl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0520a190968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.348 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.134/32] ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.348 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0520a190968 ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.352 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.353 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3c338af-232e-46f3-9597-30d05ba9e1ec", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403", Pod:"csi-node-driver-jf8nl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0520a190968", MAC:"f6:29:6e:2d:88:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:35.374614 containerd[1705]: 2025-12-16 13:06:35.371 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" Namespace="calico-system" Pod="csi-node-driver-jf8nl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-csi--node--driver--jf8nl-eth0" Dec 16 13:06:35.392643 kubelet[3171]: E1216 13:06:35.392591 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859" Dec 16 13:06:35.395242 kubelet[3171]: E1216 13:06:35.393238 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7" Dec 16 13:06:35.424040 containerd[1705]: 
time="2025-12-16T13:06:35.423788460Z" level=info msg="connecting to shim be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403" address="unix:///run/containerd/s/d43747a9ad1d44c9d71ba7796c64df84e270ad465fd6bd6bbcb8933c68cd4930" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:35.458609 systemd[1]: Started cri-containerd-be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403.scope - libcontainer container be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403. Dec 16 13:06:35.484593 containerd[1705]: time="2025-12-16T13:06:35.484551824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jf8nl,Uid:f3c338af-232e-46f3-9597-30d05ba9e1ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"be0717908d968c21366d5d0aec8ec96e9ce384a77be7ccb73b6297cfb2fbc403\"" Dec 16 13:06:35.485851 containerd[1705]: time="2025-12-16T13:06:35.485801203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:35.873706 containerd[1705]: time="2025-12-16T13:06:35.873654966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:35.877019 containerd[1705]: time="2025-12-16T13:06:35.876979524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:35.877199 containerd[1705]: time="2025-12-16T13:06:35.877085990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:06:35.877289 kubelet[3171]: E1216 13:06:35.877242 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:35.877332 kubelet[3171]: E1216 13:06:35.877305 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:35.877467 kubelet[3171]: E1216 13:06:35.877434 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:35.880100 containerd[1705]: time="2025-12-16T13:06:35.880067988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:36.231744 containerd[1705]: time="2025-12-16T13:06:36.231626558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:36.234329 containerd[1705]: time="2025-12-16T13:06:36.234281477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:36.234419 containerd[1705]: time="2025-12-16T13:06:36.234370915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:06:36.234581 kubelet[3171]: E1216 13:06:36.234530 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:36.234635 kubelet[3171]: E1216 13:06:36.234593 3171 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:36.234765 kubelet[3171]: E1216 13:06:36.234728 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:36.236147 kubelet[3171]: E1216 13:06:36.236106 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec" Dec 16 13:06:36.251968 containerd[1705]: time="2025-12-16T13:06:36.251926208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rqstk,Uid:88a83a1e-eebc-46e8-9426-3fba3e5c071e,Namespace:calico-system,Attempt:0,}" Dec 16 13:06:36.352651 systemd-networkd[1337]: cali24591303575: Link UP Dec 16 13:06:36.353329 systemd-networkd[1337]: cali24591303575: Gained carrier Dec 16 13:06:36.363332 systemd-networkd[1337]: calie11a126c727: Gained IPv6LL Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.292 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0 goldmane-666569f655- calico-system 88a83a1e-eebc-46e8-9426-3fba3e5c071e 856 0 2025-12-16 13:06:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 goldmane-666569f655-rqstk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali24591303575 [] [] }} ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.292 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.312 [INFO][4945] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" HandleID="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.312 [INFO][4945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" HandleID="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"goldmane-666569f655-rqstk", "timestamp":"2025-12-16 13:06:36.312460243 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.312 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.312 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.312 [INFO][4945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.317 [INFO][4945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.323 [INFO][4945] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.326 [INFO][4945] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.327 [INFO][4945] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.329 [INFO][4945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.329 [INFO][4945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.330 [INFO][4945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3 Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.334 [INFO][4945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.344 [INFO][4945] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.135/26] block=192.168.52.128/26 handle="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.344 [INFO][4945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.135/26] handle="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.344 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
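The IPAM trace above is the whole allocation protocol in miniature: take the host-wide lock, look up the host's block affinities, load the affine /26 block (192.168.52.128/26), claim a free address from it, write the block back to the datastore, and release the lock. A minimal Go sketch of that claim step, with a simplified in-memory Block type standing in for Calico's real datastore-backed blocks:

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // Block is a simplified stand-in for a Calico IPAM block: a /26 CIDR
    // with a free list of addresses, affine to a single host.
    type Block struct {
        CIDR       net.IPNet
        AffineHost string
        free       []net.IP
    }

    // ipamLock models the host-wide lock seen in the trace
    // ("About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock").
    var ipamLock sync.Mutex

    // claimIP mirrors the logged sequence: lock, confirm the block's host
    // affinity, take one address from the free list, release the lock.
    func claimIP(b *Block, host string) (net.IP, error) {
        ipamLock.Lock()
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."

        if b.AffineHost != host {
            return nil, fmt.Errorf("block %s not affine to %s", b.CIDR.String(), host)
        }
        if len(b.free) == 0 {
            return nil, fmt.Errorf("block %s exhausted", b.CIDR.String())
        }
        ip := b.free[0]
        b.free = b.free[1:] // real Calico persists this: "Writing block in order to claim IPs"
        return ip, nil
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.52.128/26")
        b := &Block{
            CIDR:       *cidr,
            AffineHost: "ci-4459.2.2-a-efe6a0b1f4",
            free:       []net.IP{net.ParseIP("192.168.52.135")},
        }
        ip, err := claimIP(b, "ci-4459.2.2-a-efe6a0b1f4")
        fmt.Println(ip, err) // 192.168.52.135 <nil>, matching the address claimed above
    }

Serializing every claim on one host-wide lock is what lets back-to-back sandbox creations on this node draw consecutive addresses from the same block without conflict.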
Dec 16 13:06:36.367578 containerd[1705]: 2025-12-16 13:06:36.344 [INFO][4945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.135/26] IPv6=[] ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" HandleID="k8s-pod-network.b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.346 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"88a83a1e-eebc-46e8-9426-3fba3e5c071e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"goldmane-666569f655-rqstk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali24591303575", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.347 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.135/32] ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.347 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24591303575 ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.353 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.353 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" 
Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"88a83a1e-eebc-46e8-9426-3fba3e5c071e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3", Pod:"goldmane-666569f655-rqstk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali24591303575", MAC:"32:66:d7:4b:4d:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:36.368715 containerd[1705]: 2025-12-16 13:06:36.365 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" Namespace="calico-system" Pod="goldmane-666569f655-rqstk" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-goldmane--666569f655--rqstk-eth0" Dec 16 13:06:36.395721 kubelet[3171]: E1216 13:06:36.395524 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7" Dec 16 13:06:36.396777 kubelet[3171]: E1216 13:06:36.396735 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec" Dec 16 13:06:36.412658 containerd[1705]: time="2025-12-16T13:06:36.412622983Z" level=info msg="connecting to shim b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3" address="unix:///run/containerd/s/fce3636b6c7f1222db0ae37279ec742717ab2be93dde7e47490e00380d85af88" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:36.446636 systemd[1]: Started cri-containerd-b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3.scope - libcontainer container b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3. Dec 16 13:06:36.490648 systemd-networkd[1337]: cali0520a190968: Gained IPv6LL Dec 16 13:06:36.522924 containerd[1705]: time="2025-12-16T13:06:36.522815090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rqstk,Uid:88a83a1e-eebc-46e8-9426-3fba3e5c071e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8dd7bcd626303bfd21fb4fca3e8dee6a5fbab709acd6411c6ef2eb9a8eda6d3\"" Dec 16 13:06:36.525512 containerd[1705]: time="2025-12-16T13:06:36.525379379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:36.898278 containerd[1705]: time="2025-12-16T13:06:36.898226482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:36.900892 containerd[1705]: time="2025-12-16T13:06:36.900861390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:36.900978 containerd[1705]: time="2025-12-16T13:06:36.900937388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:06:36.901192 kubelet[3171]: E1216 13:06:36.901159 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:36.901245 kubelet[3171]: E1216 13:06:36.901207 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:36.901400 kubelet[3171]: E1216 13:06:36.901346 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:36.902838 kubelet[3171]: E1216 13:06:36.902764 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e" Dec 16 13:06:37.253214 containerd[1705]: 
time="2025-12-16T13:06:37.252788320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sj4bl,Uid:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:37.353631 systemd-networkd[1337]: calib673fae897a: Link UP Dec 16 13:06:37.354600 systemd-networkd[1337]: calib673fae897a: Gained carrier Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.293 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0 coredns-674b8bbfcf- kube-system 1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e 853 0 2025-12-16 13:05:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-efe6a0b1f4 coredns-674b8bbfcf-sj4bl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib673fae897a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.294 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.316 [INFO][5020] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" HandleID="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.317 [INFO][5020] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" HandleID="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-efe6a0b1f4", "pod":"coredns-674b8bbfcf-sj4bl", "timestamp":"2025-12-16 13:06:37.316956831 +0000 UTC"}, Hostname:"ci-4459.2.2-a-efe6a0b1f4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.317 [INFO][5020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.317 [INFO][5020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.317 [INFO][5020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-efe6a0b1f4' Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.323 [INFO][5020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.327 [INFO][5020] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.331 [INFO][5020] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.332 [INFO][5020] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.334 [INFO][5020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.334 [INFO][5020] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.335 [INFO][5020] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479 Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.339 [INFO][5020] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.348 [INFO][5020] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.136/26] block=192.168.52.128/26 handle="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.348 [INFO][5020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.136/26] handle="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" host="ci-4459.2.2-a-efe6a0b1f4" Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.348 [INFO][5020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
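This second allocation walks the same lock/affinity/claim sequence and takes 192.168.52.136 from the same block. Meanwhile, every image pull in this log dies the same way: containerd asks ghcr.io for the manifest of a flatcar/calico image at tag v3.30.4, receives 404 Not Found, and reports NotFound up through the kubelet. Whether a tag exists can be checked directly against the registry's OCI distribution endpoints; a sketch in Go, assuming ghcr.io issues anonymous pull tokens for public repositories (the token and manifest endpoints are the standard registry flow, not something shown in this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // tagExists checks ghcr.io for a manifest, roughly what containerd's
    // resolver does before "failed to resolve reference" is reported.
    func tagExists(repo, tag string) (bool, error) {
        // Step 1: anonymous bearer token for a public repository.
        tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
        resp, err := http.Get(tokURL)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            return false, err
        }

        // Step 2: HEAD the manifest; a 404 here is the "not found" in the log.
        req, err := http.NewRequest(http.MethodHead,
            fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
        req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            return false, err
        }
        res.Body.Close()
        return res.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := tagExists("flatcar/calico/csi", "v3.30.4")
        fmt.Println(ok, err) // expected: false <nil>, matching the 404s logged above
    }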
Dec 16 13:06:37.371380 containerd[1705]: 2025-12-16 13:06:37.348 [INFO][5020] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.136/26] IPv6=[] ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" HandleID="k8s-pod-network.ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Workload="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.373573 containerd[1705]: 2025-12-16 13:06:37.350 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"", Pod:"coredns-674b8bbfcf-sj4bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib673fae897a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:37.373573 containerd[1705]: 2025-12-16 13:06:37.350 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.136/32] ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.373573 containerd[1705]: 2025-12-16 13:06:37.350 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib673fae897a ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.373573 containerd[1705]: 2025-12-16 13:06:37.352 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.373728 containerd[1705]: 2025-12-16 13:06:37.352 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-efe6a0b1f4", ContainerID:"ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479", Pod:"coredns-674b8bbfcf-sj4bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib673fae897a", MAC:"9a:5a:37:25:a1:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:06:37.373728 containerd[1705]: 2025-12-16 13:06:37.367 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" Namespace="kube-system" Pod="coredns-674b8bbfcf-sj4bl" WorkloadEndpoint="ci--4459.2.2--a--efe6a0b1f4-k8s-coredns--674b8bbfcf--sj4bl-eth0" Dec 16 13:06:37.401216 kubelet[3171]: E1216 13:06:37.401029 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e" Dec 16 13:06:37.402862 kubelet[3171]: E1216 13:06:37.402715 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec" Dec 16 13:06:37.439125 containerd[1705]: time="2025-12-16T13:06:37.439043931Z" level=info msg="connecting to shim ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479" address="unix:///run/containerd/s/d48e0aac1d7b2ee7f75401d24918cbeb769f788148f68105c096a5938ee996aa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:37.474644 systemd[1]: Started cri-containerd-ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479.scope - libcontainer container ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479. Dec 16 13:06:37.515180 containerd[1705]: time="2025-12-16T13:06:37.514370263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sj4bl,Uid:1cdba7b0-c9eb-4700-947f-b2cdf71c9b5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479\"" Dec 16 13:06:37.527699 containerd[1705]: time="2025-12-16T13:06:37.527678274Z" level=info msg="CreateContainer within sandbox \"ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:37.554306 containerd[1705]: time="2025-12-16T13:06:37.553682849Z" level=info msg="Container f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:37.565579 containerd[1705]: time="2025-12-16T13:06:37.565551135Z" level=info msg="CreateContainer within sandbox \"ff95f5ea66313d6a1f2ed518ff55dbad5efc8e48f2f86c12c59c0c179e68b479\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f\"" Dec 16 13:06:37.566425 containerd[1705]: time="2025-12-16T13:06:37.566393412Z" level=info msg="StartContainer for \"f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f\"" Dec 16 13:06:37.567120 containerd[1705]: time="2025-12-16T13:06:37.567089848Z" level=info msg="connecting to shim f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f" address="unix:///run/containerd/s/d48e0aac1d7b2ee7f75401d24918cbeb769f788148f68105c096a5938ee996aa" protocol=ttrpc version=3 Dec 16 13:06:37.591627 systemd[1]: Started cri-containerd-f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f.scope - libcontainer container f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f. 
Dec 16 13:06:37.626739 containerd[1705]: time="2025-12-16T13:06:37.626708362Z" level=info msg="StartContainer for \"f8a79676724b289dfdb83d06a29bae6b76d8b722aae7b8fa6714e52357b8b42f\" returns successfully" Dec 16 13:06:37.706701 systemd-networkd[1337]: cali24591303575: Gained IPv6LL Dec 16 13:06:38.401538 kubelet[3171]: E1216 13:06:38.401494 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e" Dec 16 13:06:38.427917 kubelet[3171]: I1216 13:06:38.427861 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sj4bl" podStartSLOduration=46.427841821 podStartE2EDuration="46.427841821s" podCreationTimestamp="2025-12-16 13:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:38.426642928 +0000 UTC m=+51.284860578" watchObservedRunningTime="2025-12-16 13:06:38.427841821 +0000 UTC m=+51.286059470" Dec 16 13:06:38.858621 systemd-networkd[1337]: calib673fae897a: Gained IPv6LL Dec 16 13:06:44.252664 containerd[1705]: time="2025-12-16T13:06:44.252468891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:44.590313 containerd[1705]: time="2025-12-16T13:06:44.590179362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:44.592825 containerd[1705]: time="2025-12-16T13:06:44.592787479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:06:44.592924 containerd[1705]: time="2025-12-16T13:06:44.592874177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:06:44.593134 kubelet[3171]: E1216 13:06:44.593082 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:44.593458 kubelet[3171]: E1216 13:06:44.593149 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:44.593458 kubelet[3171]: E1216 13:06:44.593280 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7be21a1e0aeb4a608313b502cd783836,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:44.595470 containerd[1705]: time="2025-12-16T13:06:44.595404123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:44.980282 containerd[1705]: time="2025-12-16T13:06:44.980231133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:44.982984 containerd[1705]: time="2025-12-16T13:06:44.982938680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:44.983039 containerd[1705]: time="2025-12-16T13:06:44.983027692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:06:44.983209 kubelet[3171]: E1216 13:06:44.983173 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:44.983256 kubelet[3171]: E1216 13:06:44.983222 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:44.983406 kubelet[3171]: E1216 13:06:44.983365 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:44.984701 kubelet[3171]: E1216 13:06:44.984654 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9" Dec 16 13:06:48.252869 containerd[1705]: time="2025-12-16T13:06:48.252825370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:48.620023 containerd[1705]: time="2025-12-16T13:06:48.619966347Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Dec 16 13:06:48.622962 containerd[1705]: time="2025-12-16T13:06:48.622848488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:48.622962 containerd[1705]: time="2025-12-16T13:06:48.622876338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:06:48.623946 kubelet[3171]: E1216 13:06:48.623207 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:48.623946 kubelet[3171]: E1216 13:06:48.623263 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:48.623946 kubelet[3171]: E1216 13:06:48.623517 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz6gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:48.624816 containerd[1705]: time="2025-12-16T13:06:48.624641147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:48.625138 kubelet[3171]: E1216 13:06:48.625093 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859" Dec 16 13:06:48.998559 containerd[1705]: time="2025-12-16T13:06:48.998429789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:49.001346 containerd[1705]: time="2025-12-16T13:06:49.001246422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:49.001466 containerd[1705]: time="2025-12-16T13:06:49.001249499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:06:49.001570 kubelet[3171]: E1216 13:06:49.001540 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:49.001641 kubelet[3171]: E1216 13:06:49.001583 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:49.001770 kubelet[3171]: 
E1216 13:06:49.001728 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cds9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:49.002957 kubelet[3171]: E1216 13:06:49.002895 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:06:50.253143 containerd[1705]: time="2025-12-16T13:06:50.253092954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:06:50.627865 containerd[1705]: time="2025-12-16T13:06:50.627824595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:06:50.630351 containerd[1705]: time="2025-12-16T13:06:50.630322427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:06:50.630451 containerd[1705]: time="2025-12-16T13:06:50.630375809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:06:50.630578 kubelet[3171]: E1216 13:06:50.630542 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:06:50.630885 kubelet[3171]: E1216 13:06:50.630601 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:06:50.630885 kubelet[3171]: E1216 13:06:50.630801 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcvgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:50.632409 kubelet[3171]: E1216 13:06:50.632061 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:06:50.632567 containerd[1705]: time="2025-12-16T13:06:50.632170282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:06:50.997211 containerd[1705]: time="2025-12-16T13:06:50.997087090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:06:50.999624 containerd[1705]: time="2025-12-16T13:06:50.999583750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:06:50.999700 containerd[1705]: time="2025-12-16T13:06:50.999669527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 13:06:50.999838 kubelet[3171]: E1216 13:06:50.999796 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:06:50.999897 kubelet[3171]: E1216 13:06:50.999851 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:06:51.000031 kubelet[3171]: E1216 13:06:50.999990 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:51.002454 containerd[1705]: time="2025-12-16T13:06:51.002349242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:06:51.359964 containerd[1705]: time="2025-12-16T13:06:51.359927564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:06:51.362494 containerd[1705]: time="2025-12-16T13:06:51.362451435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:06:51.362594 containerd[1705]: time="2025-12-16T13:06:51.362552551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 13:06:51.362773 kubelet[3171]: E1216 13:06:51.362741 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:06:51.362843 kubelet[3171]: E1216 13:06:51.362788 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:06:51.363330 containerd[1705]: time="2025-12-16T13:06:51.363124231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:06:51.363670 kubelet[3171]: E1216 13:06:51.363377 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:51.364798 kubelet[3171]: E1216 13:06:51.364758 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:06:51.718084 containerd[1705]: time="2025-12-16T13:06:51.717961056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:06:51.720937 containerd[1705]: time="2025-12-16T13:06:51.720884456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:06:51.721013 containerd[1705]: time="2025-12-16T13:06:51.720976630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:06:51.721234 kubelet[3171]: E1216 13:06:51.721106 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:06:51.721234 kubelet[3171]: E1216 13:06:51.721157 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:06:51.721829 kubelet[3171]: E1216 13:06:51.721351 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:06:51.723529 kubelet[3171]: E1216 13:06:51.723454 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:06:57.255243 kubelet[3171]: E1216 13:06:57.255190 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:07:01.252935 kubelet[3171]: E1216 13:07:01.252644 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:07:02.253029 kubelet[3171]: E1216 13:07:02.252981 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:07:03.253530 kubelet[3171]: E1216 13:07:03.253179 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:07:04.253473 kubelet[3171]: E1216 13:07:04.253419 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:07:06.254249 kubelet[3171]: E1216 13:07:06.254169 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:07:11.272312 containerd[1705]: time="2025-12-16T13:07:11.270896286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:07:11.648226 containerd[1705]: time="2025-12-16T13:07:11.648094955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:11.651187 containerd[1705]: time="2025-12-16T13:07:11.651053330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:07:11.651187 containerd[1705]: time="2025-12-16T13:07:11.651160768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 13:07:11.652692 kubelet[3171]: E1216 13:07:11.652625 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:07:11.653368 kubelet[3171]: E1216 13:07:11.653107 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:07:11.653565 kubelet[3171]: E1216 13:07:11.653514 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7be21a1e0aeb4a608313b502cd783836,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:11.656092 containerd[1705]: time="2025-12-16T13:07:11.655851821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:07:12.005762 containerd[1705]: time="2025-12-16T13:07:12.005625962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:12.008351 containerd[1705]: time="2025-12-16T13:07:12.008307647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:07:12.008450 containerd[1705]: time="2025-12-16T13:07:12.008398133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:07:12.008671 kubelet[3171]: E1216 13:07:12.008632 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:07:12.008739 kubelet[3171]: E1216 13:07:12.008685 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:07:12.009147 kubelet[3171]: E1216 13:07:12.008836 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:12.010063 kubelet[3171]: E1216 13:07:12.010001 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:07:15.256317 containerd[1705]: time="2025-12-16T13:07:15.253633863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:15.606696 containerd[1705]: time="2025-12-16T13:07:15.606555079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:15.614944 containerd[1705]: time="2025-12-16T13:07:15.614837023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:15.614944 containerd[1705]: time="2025-12-16T13:07:15.614878936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:07:15.615126 kubelet[3171]: E1216 13:07:15.615085 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:15.615463 kubelet[3171]: E1216 13:07:15.615144 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:15.615463 kubelet[3171]: E1216 13:07:15.615290 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cds9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:15.617455 kubelet[3171]: E1216 13:07:15.617410 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:07:16.253811 containerd[1705]: time="2025-12-16T13:07:16.253769270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:16.612774 containerd[1705]: time="2025-12-16T13:07:16.612735510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:16.615342 containerd[1705]: time="2025-12-16T13:07:16.615308795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:16.615397 containerd[1705]: time="2025-12-16T13:07:16.615389436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:07:16.615596 kubelet[3171]: E1216 13:07:16.615553 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:16.616176 kubelet[3171]: E1216 13:07:16.615606 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:16.616176 kubelet[3171]: E1216 13:07:16.615757 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcvgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:16.617534 kubelet[3171]: E1216 13:07:16.617461 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:07:18.253698 containerd[1705]: time="2025-12-16T13:07:18.253411972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:07:18.610254 containerd[1705]: time="2025-12-16T13:07:18.610213440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:18.612911 containerd[1705]: time="2025-12-16T13:07:18.612862127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:07:18.612977 containerd[1705]: time="2025-12-16T13:07:18.612964892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:07:18.613147 kubelet[3171]: E1216 13:07:18.613097 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:07:18.613436 kubelet[3171]: E1216 13:07:18.613153 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:07:18.613683 kubelet[3171]: E1216 13:07:18.613595 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz6gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:18.613812 containerd[1705]: time="2025-12-16T13:07:18.613771460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:07:18.615219 kubelet[3171]: E1216 13:07:18.615141 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:07:18.982442 containerd[1705]: time="2025-12-16T13:07:18.982320005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:18.985213 containerd[1705]: time="2025-12-16T13:07:18.985106663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:07:18.985213 containerd[1705]: time="2025-12-16T13:07:18.985132699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 13:07:18.985497 kubelet[3171]: E1216 13:07:18.985432 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:07:18.985559 kubelet[3171]: E1216 13:07:18.985512 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:07:18.985680 kubelet[3171]: E1216 13:07:18.985648 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:18.988507 containerd[1705]: time="2025-12-16T13:07:18.988450087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:07:19.362643 containerd[1705]: time="2025-12-16T13:07:19.362445635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:19.369002 containerd[1705]: time="2025-12-16T13:07:19.368864696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:07:19.369002 containerd[1705]: time="2025-12-16T13:07:19.368969823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 13:07:19.369285 kubelet[3171]: E1216 13:07:19.369244 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:07:19.369350 kubelet[3171]: E1216 13:07:19.369303 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:07:19.369494 kubelet[3171]: E1216 13:07:19.369439 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:19.371614 kubelet[3171]: E1216 13:07:19.371572 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:07:21.257500 containerd[1705]: time="2025-12-16T13:07:21.257435198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:07:21.627809 containerd[1705]: time="2025-12-16T13:07:21.627754046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:21.630624 containerd[1705]: time="2025-12-16T13:07:21.630578643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:07:21.630682 containerd[1705]: time="2025-12-16T13:07:21.630658948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:07:21.630864 kubelet[3171]: E1216 13:07:21.630811 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:07:21.631246 kubelet[3171]: E1216 13:07:21.630878 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:07:21.631246 kubelet[3171]: E1216 13:07:21.631039 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:21.632212 kubelet[3171]: E1216 13:07:21.632174 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:07:25.257038 kubelet[3171]: E1216 13:07:25.256975 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:07:30.252392 kubelet[3171]: E1216 13:07:30.252342 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:07:31.255447 kubelet[3171]: E1216 13:07:31.255107 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:07:31.255905 kubelet[3171]: E1216 13:07:31.255784 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:07:34.252309 kubelet[3171]: E1216 13:07:34.251950 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:07:35.830746 systemd[1]: Started sshd@7-10.200.0.43:22-10.200.16.10:34940.service - OpenSSH per-connection server daemon (10.200.16.10:34940).
Dec 16 13:07:36.254079 kubelet[3171]: E1216 13:07:36.254023 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:07:36.383061 sshd[5235]: Accepted publickey for core from 10.200.16.10 port 34940 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:36.384260 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:36.387998 systemd-logind[1684]: New session 10 of user core.
Dec 16 13:07:36.393662 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:07:36.884142 sshd[5238]: Connection closed by 10.200.16.10 port 34940
Dec 16 13:07:36.884749 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:36.890257 systemd[1]: sshd@7-10.200.0.43:22-10.200.16.10:34940.service: Deactivated successfully.
Dec 16 13:07:36.890608 systemd-logind[1684]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:07:36.894174 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:07:36.898838 systemd-logind[1684]: Removed session 10.
Dec 16 13:07:37.256056 kubelet[3171]: E1216 13:07:37.255126 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:07:41.987147 systemd[1]: Started sshd@8-10.200.0.43:22-10.200.16.10:50090.service - OpenSSH per-connection server daemon (10.200.16.10:50090).
Dec 16 13:07:42.252735 kubelet[3171]: E1216 13:07:42.252394 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:07:42.543348 sshd[5251]: Accepted publickey for core from 10.200.16.10 port 50090 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:42.546315 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:42.550980 systemd-logind[1684]: New session 11 of user core.
Dec 16 13:07:42.555619 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:07:43.045919 sshd[5254]: Connection closed by 10.200.16.10 port 50090
Dec 16 13:07:43.047801 sshd-session[5251]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:43.051684 systemd[1]: sshd@8-10.200.0.43:22-10.200.16.10:50090.service: Deactivated successfully.
Dec 16 13:07:43.054982 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:07:43.056915 systemd-logind[1684]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:07:43.058701 systemd-logind[1684]: Removed session 11.
Dec 16 13:07:44.253806 kubelet[3171]: E1216 13:07:44.253711 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:07:46.254414 kubelet[3171]: E1216 13:07:46.252904 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:07:48.151227 systemd[1]: Started sshd@9-10.200.0.43:22-10.200.16.10:50094.service - OpenSSH per-connection server daemon (10.200.16.10:50094).
Dec 16 13:07:48.253276 kubelet[3171]: E1216 13:07:48.253228 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:07:48.256817 kubelet[3171]: E1216 13:07:48.256755 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:07:48.716667 sshd[5269]: Accepted publickey for core from 10.200.16.10 port 50094 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:48.717782 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:48.722463 systemd-logind[1684]: New session 12 of user core.
Dec 16 13:07:48.728669 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:07:49.160979 sshd[5272]: Connection closed by 10.200.16.10 port 50094
Dec 16 13:07:49.161571 sshd-session[5269]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:49.164618 systemd[1]: sshd@9-10.200.0.43:22-10.200.16.10:50094.service: Deactivated successfully.
Dec 16 13:07:49.166558 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:07:49.168428 systemd-logind[1684]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:07:49.169455 systemd-logind[1684]: Removed session 12.
Dec 16 13:07:49.259052 systemd[1]: Started sshd@10-10.200.0.43:22-10.200.16.10:50102.service - OpenSSH per-connection server daemon (10.200.16.10:50102).
Dec 16 13:07:49.821520 sshd[5285]: Accepted publickey for core from 10.200.16.10 port 50102 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:49.823691 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:49.833025 systemd-logind[1684]: New session 13 of user core.
Dec 16 13:07:49.837004 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:07:50.252085 kubelet[3171]: E1216 13:07:50.252037 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:07:50.323005 sshd[5291]: Connection closed by 10.200.16.10 port 50102
Dec 16 13:07:50.325727 sshd-session[5285]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:50.329222 systemd-logind[1684]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:07:50.331732 systemd[1]: sshd@10-10.200.0.43:22-10.200.16.10:50102.service: Deactivated successfully.
Dec 16 13:07:50.335051 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:07:50.339214 systemd-logind[1684]: Removed session 13.
Dec 16 13:07:50.430722 systemd[1]: Started sshd@11-10.200.0.43:22-10.200.16.10:41628.service - OpenSSH per-connection server daemon (10.200.16.10:41628).
Dec 16 13:07:50.990947 sshd[5302]: Accepted publickey for core from 10.200.16.10 port 41628 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:50.993029 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:50.999225 systemd-logind[1684]: New session 14 of user core.
Dec 16 13:07:51.006639 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:07:51.479032 sshd[5305]: Connection closed by 10.200.16.10 port 41628
Dec 16 13:07:51.479847 sshd-session[5302]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:51.484075 systemd-logind[1684]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:07:51.486013 systemd[1]: sshd@11-10.200.0.43:22-10.200.16.10:41628.service: Deactivated successfully.
Dec 16 13:07:51.489361 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:07:51.495111 systemd-logind[1684]: Removed session 14.
Dec 16 13:07:56.255505 kubelet[3171]: E1216 13:07:56.254659 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:07:56.584880 systemd[1]: Started sshd@12-10.200.0.43:22-10.200.16.10:41640.service - OpenSSH per-connection server daemon (10.200.16.10:41640).
Dec 16 13:07:57.142677 sshd[5325]: Accepted publickey for core from 10.200.16.10 port 41640 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:57.144718 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:57.150875 systemd-logind[1684]: New session 15 of user core.
Dec 16 13:07:57.156651 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:07:57.254145 containerd[1705]: time="2025-12-16T13:07:57.253870491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:57.614976 sshd[5328]: Connection closed by 10.200.16.10 port 41640
Dec 16 13:07:57.615728 sshd-session[5325]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:57.621284 systemd[1]: sshd@12-10.200.0.43:22-10.200.16.10:41640.service: Deactivated successfully.
Dec 16 13:07:57.625115 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:07:57.627650 systemd-logind[1684]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:07:57.630536 systemd-logind[1684]: Removed session 15.
Dec 16 13:07:57.633409 containerd[1705]: time="2025-12-16T13:07:57.633372109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:57.636153 containerd[1705]: time="2025-12-16T13:07:57.636107983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:57.636225 containerd[1705]: time="2025-12-16T13:07:57.636136792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:07:57.636368 kubelet[3171]: E1216 13:07:57.636332 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:57.636609 kubelet[3171]: E1216 13:07:57.636386 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:57.637320 kubelet[3171]: E1216 13:07:57.637236 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cds9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-qprp2_calico-apiserver(2db3c056-ec93-4d31-a2ac-714fd98c713a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:57.638840 kubelet[3171]: E1216 13:07:57.638804 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:07:57.719730 systemd[1]: Started sshd@13-10.200.0.43:22-10.200.16.10:41654.service - OpenSSH per-connection server daemon (10.200.16.10:41654).
Dec 16 13:07:58.295384 sshd[5342]: Accepted publickey for core from 10.200.16.10 port 41654 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:58.296903 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:58.300298 systemd-logind[1684]: New session 16 of user core.
Dec 16 13:07:58.304627 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:07:58.767575 sshd[5345]: Connection closed by 10.200.16.10 port 41654
Dec 16 13:07:58.768148 sshd-session[5342]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:58.770954 systemd[1]: sshd@13-10.200.0.43:22-10.200.16.10:41654.service: Deactivated successfully.
Dec 16 13:07:58.773052 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:07:58.775454 systemd-logind[1684]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:07:58.776313 systemd-logind[1684]: Removed session 16.
Dec 16 13:07:58.866049 systemd[1]: Started sshd@14-10.200.0.43:22-10.200.16.10:41660.service - OpenSSH per-connection server daemon (10.200.16.10:41660).
Dec 16 13:07:59.256005 containerd[1705]: time="2025-12-16T13:07:59.255958230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:07:59.430514 sshd[5355]: Accepted publickey for core from 10.200.16.10 port 41660 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:59.431514 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:59.442611 systemd-logind[1684]: New session 17 of user core.
Dec 16 13:07:59.449373 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:07:59.624862 containerd[1705]: time="2025-12-16T13:07:59.624814726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:07:59.628298 containerd[1705]: time="2025-12-16T13:07:59.628202359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:07:59.628399 containerd[1705]: time="2025-12-16T13:07:59.628252097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:07:59.628743 kubelet[3171]: E1216 13:07:59.628707 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:59.629058 kubelet[3171]: E1216 13:07:59.628890 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:07:59.630502 kubelet[3171]: E1216 13:07:59.629601 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcvgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8684c6f77-vmt4x_calico-apiserver(f6e8a896-34e6-4faa-8d21-a2f273b7f6d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:07:59.630767 kubelet[3171]: E1216 13:07:59.630732 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:08:00.254227 containerd[1705]: time="2025-12-16T13:08:00.254069382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:08:00.410536 sshd[5376]: Connection closed by 10.200.16.10 port 41660
Dec 16 13:08:00.412069 sshd-session[5355]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:00.415210 systemd[1]: sshd@14-10.200.0.43:22-10.200.16.10:41660.service: Deactivated successfully.
Dec 16 13:08:00.417458 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:08:00.418499 systemd-logind[1684]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:08:00.419812 systemd-logind[1684]: Removed session 17.
Dec 16 13:08:00.516742 systemd[1]: Started sshd@15-10.200.0.43:22-10.200.16.10:32796.service - OpenSSH per-connection server daemon (10.200.16.10:32796).
Dec 16 13:08:00.689641 containerd[1705]: time="2025-12-16T13:08:00.689510816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:00.693169 containerd[1705]: time="2025-12-16T13:08:00.693068484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:08:00.693570 containerd[1705]: time="2025-12-16T13:08:00.693092892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:00.693688 kubelet[3171]: E1216 13:08:00.693647 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:00.694344 kubelet[3171]: E1216 13:08:00.693705 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:00.694344 kubelet[3171]: E1216 13:08:00.693865 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz6gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6df6cd5b4c-5j8sl_calico-system(320e67c6-de90-4268-be41-844d88ae2859): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:00.695426 kubelet[3171]: E1216 13:08:00.695388 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:08:01.079506 sshd[5398]: Accepted publickey for core from 10.200.16.10 port 32796 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:01.080397 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:01.085078 systemd-logind[1684]: New session 18 of user core.
Dec 16 13:08:01.089952 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:08:01.254350 containerd[1705]: time="2025-12-16T13:08:01.254314680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:08:01.679701 sshd[5401]: Connection closed by 10.200.16.10 port 32796
Dec 16 13:08:01.680258 sshd-session[5398]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:01.683597 systemd[1]: sshd@15-10.200.0.43:22-10.200.16.10:32796.service: Deactivated successfully.
Dec 16 13:08:01.685511 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:08:01.686306 systemd-logind[1684]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:08:01.688010 systemd-logind[1684]: Removed session 18.
Dec 16 13:08:01.763399 containerd[1705]: time="2025-12-16T13:08:01.763352936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:01.766540 containerd[1705]: time="2025-12-16T13:08:01.766496991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:08:01.766637 containerd[1705]: time="2025-12-16T13:08:01.766501822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 13:08:01.766838 kubelet[3171]: E1216 13:08:01.766802 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:01.767071 kubelet[3171]: E1216 13:08:01.766855 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:01.767071 kubelet[3171]: E1216 13:08:01.766987 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7be21a1e0aeb4a608313b502cd783836,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:01.770292 containerd[1705]: time="2025-12-16T13:08:01.770257359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:08:01.795727 systemd[1]: Started sshd@16-10.200.0.43:22-10.200.16.10:32800.service - OpenSSH per-connection server daemon (10.200.16.10:32800).
Dec 16 13:08:02.143690 containerd[1705]: time="2025-12-16T13:08:02.143642400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:02.149411 containerd[1705]: time="2025-12-16T13:08:02.149256644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:08:02.149411 containerd[1705]: time="2025-12-16T13:08:02.149368384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:02.149816 kubelet[3171]: E1216 13:08:02.149739 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:02.149816 kubelet[3171]: E1216 13:08:02.149797 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:02.150071 kubelet[3171]: E1216 13:08:02.150032 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjvzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59878fbb86-xz6hr_calico-system(8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:02.151749 kubelet[3171]: E1216 13:08:02.151672 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:08:02.363905 sshd[5425]: Accepted publickey for core from 10.200.16.10 port 32800 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:02.365033 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:02.369532 systemd-logind[1684]: New session 19 of user core.
Dec 16 13:08:02.371642 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:08:02.832856 sshd[5428]: Connection closed by 10.200.16.10 port 32800
Dec 16 13:08:02.833396 sshd-session[5425]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:02.837139 systemd-logind[1684]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:08:02.837973 systemd[1]: sshd@16-10.200.0.43:22-10.200.16.10:32800.service: Deactivated successfully.
Dec 16 13:08:02.840557 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:08:02.845407 systemd-logind[1684]: Removed session 19.
Dec 16 13:08:05.269968 containerd[1705]: time="2025-12-16T13:08:05.269721348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:08:05.633147 containerd[1705]: time="2025-12-16T13:08:05.633097642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:05.635919 containerd[1705]: time="2025-12-16T13:08:05.635870013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:08:05.636035 containerd[1705]: time="2025-12-16T13:08:05.635960764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:08:05.637501 kubelet[3171]: E1216 13:08:05.636240 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:08:05.637501 kubelet[3171]: E1216 13:08:05.636297 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:08:05.637501 kubelet[3171]: E1216 13:08:05.636458 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rqstk_calico-system(88a83a1e-eebc-46e8-9426-3fba3e5c071e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:05.638020 kubelet[3171]: E1216 13:08:05.637774 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:08:07.940229 systemd[1]: Started sshd@17-10.200.0.43:22-10.200.16.10:32812.service - OpenSSH per-connection server daemon (10.200.16.10:32812).
Dec 16 13:08:08.492196 sshd[5444]: Accepted publickey for core from 10.200.16.10 port 32812 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:08.494087 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:08.499225 systemd-logind[1684]: New session 20 of user core.
Dec 16 13:08:08.503634 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:08:08.966513 sshd[5454]: Connection closed by 10.200.16.10 port 32812
Dec 16 13:08:08.967055 sshd-session[5444]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:08.971330 systemd[1]: sshd@17-10.200.0.43:22-10.200.16.10:32812.service: Deactivated successfully.
Dec 16 13:08:08.974270 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:08:08.975278 systemd-logind[1684]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:08:08.977369 systemd-logind[1684]: Removed session 20.
Dec 16 13:08:09.257003 kubelet[3171]: E1216 13:08:09.256594 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:08:11.255000 kubelet[3171]: E1216 13:08:11.254958 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:08:11.256968 containerd[1705]: time="2025-12-16T13:08:11.256705476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:08:11.258684 kubelet[3171]: E1216 13:08:11.258647 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:08:11.789923 containerd[1705]: time="2025-12-16T13:08:11.789863281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:11.792820 containerd[1705]: time="2025-12-16T13:08:11.792769266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:08:11.792937 containerd[1705]: time="2025-12-16T13:08:11.792877647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 13:08:11.793092 kubelet[3171]: E1216 13:08:11.793014 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:08:11.793092 kubelet[3171]: E1216 13:08:11.793071 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:08:11.793595 kubelet[3171]: E1216 13:08:11.793442 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:11.796522 containerd[1705]: time="2025-12-16T13:08:11.796339343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:08:12.151017 containerd[1705]: time="2025-12-16T13:08:12.150969876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:12.153592 containerd[1705]: time="2025-12-16T13:08:12.153536876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:08:12.153734 containerd[1705]: time="2025-12-16T13:08:12.153641885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 13:08:12.153860 kubelet[3171]: E1216 13:08:12.153794 3171 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:08:12.153925 kubelet[3171]: E1216 13:08:12.153873 3171 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:08:12.154069 kubelet[3171]: E1216 13:08:12.154029 3171 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jf8nl_calico-system(f3c338af-232e-46f3-9597-30d05ba9e1ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:12.155242 kubelet[3171]: E1216 13:08:12.155182 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:08:14.067737 systemd[1]: Started sshd@18-10.200.0.43:22-10.200.16.10:53352.service - OpenSSH per-connection server daemon (10.200.16.10:53352).
Dec 16 13:08:14.630461 sshd[5466]: Accepted publickey for core from 10.200.16.10 port 53352 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:14.631601 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:14.635344 systemd-logind[1684]: New session 21 of user core.
Dec 16 13:08:14.639657 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:08:15.084724 sshd[5469]: Connection closed by 10.200.16.10 port 53352
Dec 16 13:08:15.086078 sshd-session[5466]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:15.091614 systemd[1]: sshd@18-10.200.0.43:22-10.200.16.10:53352.service: Deactivated successfully.
Dec 16 13:08:15.094465 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:08:15.096222 systemd-logind[1684]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:08:15.098657 systemd-logind[1684]: Removed session 21.
Dec 16 13:08:16.252600 kubelet[3171]: E1216 13:08:16.252525 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59878fbb86-xz6hr" podUID="8813b7f5-162a-48c4-adf1-e3bb0aa1a8c9"
Dec 16 13:08:17.255097 kubelet[3171]: E1216 13:08:17.255048 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rqstk" podUID="88a83a1e-eebc-46e8-9426-3fba3e5c071e"
Dec 16 13:08:20.187736 systemd[1]: Started sshd@19-10.200.0.43:22-10.200.16.10:35774.service - OpenSSH per-connection server daemon (10.200.16.10:35774).
Dec 16 13:08:20.742582 sshd[5481]: Accepted publickey for core from 10.200.16.10 port 35774 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:20.743577 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:20.749120 systemd-logind[1684]: New session 22 of user core.
Dec 16 13:08:20.755645 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:08:21.183375 sshd[5484]: Connection closed by 10.200.16.10 port 35774
Dec 16 13:08:21.183968 sshd-session[5481]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:21.187546 systemd[1]: sshd@19-10.200.0.43:22-10.200.16.10:35774.service: Deactivated successfully.
Dec 16 13:08:21.189421 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:08:21.190347 systemd-logind[1684]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:08:21.192315 systemd-logind[1684]: Removed session 22.
Dec 16 13:08:22.253054 kubelet[3171]: E1216 13:08:22.252763 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-vmt4x" podUID="f6e8a896-34e6-4faa-8d21-a2f273b7f6d7"
Dec 16 13:08:23.256424 kubelet[3171]: E1216 13:08:23.256377 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8684c6f77-qprp2" podUID="2db3c056-ec93-4d31-a2ac-714fd98c713a"
Dec 16 13:08:23.261424 kubelet[3171]: E1216 13:08:23.261383 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jf8nl" podUID="f3c338af-232e-46f3-9597-30d05ba9e1ec"
Dec 16 13:08:25.257641 kubelet[3171]: E1216 13:08:25.256654 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6df6cd5b4c-5j8sl" podUID="320e67c6-de90-4268-be41-844d88ae2859"
Dec 16 13:08:26.291431 systemd[1]: Started sshd@20-10.200.0.43:22-10.200.16.10:35790.service - OpenSSH per-connection server daemon (10.200.16.10:35790).
Dec 16 13:08:26.855845 sshd[5499]: Accepted publickey for core from 10.200.16.10 port 35790 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:26.857643 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:26.864631 systemd-logind[1684]: New session 23 of user core.
Dec 16 13:08:26.869646 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:08:27.303543 sshd[5502]: Connection closed by 10.200.16.10 port 35790
Dec 16 13:08:27.304081 sshd-session[5499]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:27.308113 systemd-logind[1684]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:08:27.309331 systemd[1]: sshd@20-10.200.0.43:22-10.200.16.10:35790.service: Deactivated successfully.
Dec 16 13:08:27.312710 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:08:27.314649 systemd-logind[1684]: Removed session 23.