Dec 16 13:04:54.985175 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:04:54.985204 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:54.985217 kernel: BIOS-provided physical RAM map:
Dec 16 13:04:54.985224 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:04:54.985230 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 13:04:54.985237 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 13:04:54.985246 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 13:04:54.985253 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 13:04:54.985260 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 13:04:54.985270 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 13:04:54.985277 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 13:04:54.985285 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 13:04:54.985291 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 13:04:54.985298 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 13:04:54.985307 kernel: NX (Execute Disable) protection: active
Dec 16 13:04:54.985316 kernel: APIC: Static calls initialized
Dec 16 13:04:54.985323 kernel: efi: EFI v2.7 by Microsoft
Dec 16 13:04:54.985331 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9ab698 RNG=0x3ffd2018
Dec 16 13:04:54.985339 kernel: random: crng init done
Dec 16 13:04:54.985347 kernel: secureboot: Secure boot disabled
Dec 16 13:04:54.985355 kernel: SMBIOS 3.1.0 present.
Dec 16 13:04:54.985363 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 13:04:54.985370 kernel: DMI: Memory slots populated: 2/2
Dec 16 13:04:54.985377 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 13:04:54.985385 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 13:04:54.985392 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 13:04:54.985400 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 13:04:54.985408 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 13:04:54.985416 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:54.985424 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:04:54.985432 kernel: tsc: Detected 2299.999 MHz processor
Dec 16 13:04:54.985440 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:04:54.985449 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:04:54.985457 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 13:04:54.985465 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:04:54.985473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:04:54.985482 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 13:04:54.985490 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 13:04:54.985498 kernel: Using GB pages for direct mapping
Dec 16 13:04:54.985507 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:04:54.985518 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 13:04:54.985527 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985537 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985545 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 13:04:54.985552 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 13:04:54.985560 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985569 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985577 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985586 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:54.985596 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:04:54.985604 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:04:54.985612 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 13:04:54.985620 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 13:04:54.985628 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 13:04:54.985636 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 13:04:54.985644 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 13:04:54.985652 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 13:04:54.985661 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 13:04:54.985671 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 13:04:54.985685 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 13:04:54.985694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 13:04:54.985703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 13:04:54.985711 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 13:04:54.985719 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 16 13:04:54.985726 kernel: Zone ranges:
Dec 16 13:04:54.985734 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:04:54.985742 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:04:54.985751 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:54.985758 kernel: Device empty
Dec 16 13:04:54.985766 kernel: Movable zone start for each node
Dec 16 13:04:54.985775 kernel: Early memory node ranges
Dec 16 13:04:54.985783 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:04:54.985792 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 13:04:54.985800 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 13:04:54.985808 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 13:04:54.985816 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:04:54.985826 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 13:04:54.985834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:04:54.985842 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:04:54.985850 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 13:04:54.985858 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 13:04:54.985867 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 13:04:54.985875 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 13:04:54.985884 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:04:54.985892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:04:54.985902 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:04:54.985910 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 13:04:54.985918 kernel: TSC deadline timer available
Dec 16 13:04:54.985926 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:04:54.985934 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:04:54.985943 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:04:54.985951 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:04:54.985960 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:04:54.985968 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:04:54.985977 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:04:54.985987 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 13:04:54.985995 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 13:04:54.986002 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:04:54.988060 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:04:54.988071 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:04:54.988079 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:04:54.988087 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:04:54.988096 kernel: Hyper-V: PV spinlocks enabled
Dec 16 13:04:54.988104 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:04:54.988119 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:04:54.988128 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:04:54.988135 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:04:54.988144 kernel: Fallback order for Node 0: 0
Dec 16 13:04:54.988151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 13:04:54.988159 kernel: Policy zone: Normal
Dec 16 13:04:54.988168 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:04:54.988176 kernel: software IO TLB: area num 2.
Dec 16 13:04:54.988186 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:04:54.988195 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:04:54.988203 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:04:54.988211 kernel: Dynamic Preempt: voluntary
Dec 16 13:04:54.988220 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:04:54.988228 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:04:54.988244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:04:54.988255 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:04:54.988264 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:04:54.988273 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:04:54.988282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:04:54.988292 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:04:54.988302 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.988311 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.988320 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:04:54.988329 kernel: Using NULL legacy PIC
Dec 16 13:04:54.988340 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 13:04:54.988349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:04:54.988358 kernel: Console: colour dummy device 80x25
Dec 16 13:04:54.988367 kernel: printk: legacy console [tty1] enabled
Dec 16 13:04:54.988375 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:04:54.988384 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 13:04:54.988393 kernel: ACPI: Core revision 20240827
Dec 16 13:04:54.988402 kernel: Failed to register legacy timer interrupt
Dec 16 13:04:54.988411 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:04:54.988421 kernel: x2apic enabled
Dec 16 13:04:54.988430 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:04:54.988439 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 13:04:54.988448 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 13:04:54.988458 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 13:04:54.988467 kernel: Hyper-V: Using IPI hypercalls
Dec 16 13:04:54.988476 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 13:04:54.988486 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 13:04:54.988495 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 13:04:54.988507 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 13:04:54.988516 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 13:04:54.988526 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 13:04:54.988535 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Dec 16 13:04:54.988544 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Dec 16 13:04:54.988553 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:04:54.988562 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:04:54.988571 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:04:54.988580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:04:54.988588 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:04:54.988599 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:04:54.988608 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:04:54.988617 kernel: RETBleed: Vulnerable
Dec 16 13:04:54.988626 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:04:54.988635 kernel: active return thunk: its_return_thunk
Dec 16 13:04:54.988644 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:04:54.988653 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:04:54.988662 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:04:54.988670 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:04:54.988679 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:04:54.988690 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:04:54.988699 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:04:54.988708 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 13:04:54.988717 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 13:04:54.988726 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 13:04:54.988735 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:04:54.988743 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:04:54.988752 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:04:54.988761 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:04:54.988770 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 13:04:54.988779 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 13:04:54.988787 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 13:04:54.988798 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 13:04:54.988807 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:04:54.988816 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:04:54.988825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:04:54.988833 kernel: landlock: Up and running.
Dec 16 13:04:54.988842 kernel: SELinux: Initializing.
Dec 16 13:04:54.988851 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.988860 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.988869 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 13:04:54.988878 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 13:04:54.988888 kernel: signal: max sigframe size: 11952
Dec 16 13:04:54.988899 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:04:54.988908 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:04:54.988918 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:04:54.988927 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:04:54.988936 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:04:54.988945 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:04:54.988954 kernel: .... node #0, CPUs: #1
Dec 16 13:04:54.988963 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:04:54.988973 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 16 13:04:54.988984 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308180K reserved, 0K cma-reserved)
Dec 16 13:04:54.988993 kernel: devtmpfs: initialized
Dec 16 13:04:54.989003 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:04:54.989025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 13:04:54.989034 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:04:54.989042 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:04:54.989051 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:04:54.989059 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:04:54.989067 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:04:54.989078 kernel: audit: type=2000 audit(1765890291.071:1): state=initialized audit_enabled=0 res=1
Dec 16 13:04:54.989087 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:04:54.989097 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:04:54.989105 kernel: cpuidle: using governor menu
Dec 16 13:04:54.989114 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:04:54.989122 kernel: dca service started, version 1.12.1
Dec 16 13:04:54.989131 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 13:04:54.989138 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 13:04:54.989148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:04:54.989157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:04:54.989165 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:04:54.989174 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:04:54.989182 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:04:54.989191 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:04:54.989199 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:04:54.989208 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:04:54.989217 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:04:54.989227 kernel: ACPI: Interpreter enabled
Dec 16 13:04:54.989236 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:04:54.989245 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:04:54.989255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:04:54.989264 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:04:54.989273 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 13:04:54.989282 kernel: iommu: Default domain type: Translated
Dec 16 13:04:54.989291 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:04:54.989299 kernel: efivars: Registered efivars operations
Dec 16 13:04:54.989308 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:04:54.989319 kernel: PCI: System does not support PCI
Dec 16 13:04:54.989328 kernel: vgaarb: loaded
Dec 16 13:04:54.989337 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 13:04:54.989346 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:04:54.989355 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:04:54.989364 kernel: pnp: PnP ACPI init
Dec 16 13:04:54.989373 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 13:04:54.989382 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:04:54.989390 kernel: NET: Registered PF_INET protocol family
Dec 16 13:04:54.989401 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:04:54.989411 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:04:54.989420 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:04:54.989429 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:04:54.989439 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:04:54.989447 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:04:54.989456 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.989465 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:04:54.989476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:04:54.989485 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:04:54.989494 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:04:54.989503 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:04:54.989513 kernel: software IO TLB: mapped [mem 0x000000003a9ab000-0x000000003e9ab000] (64MB)
Dec 16 13:04:54.989522 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:04:54.989530 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 13:04:54.989539 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Dec 16 13:04:54.989548 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:04:54.989559 kernel: Initialise system trusted keyrings
Dec 16 13:04:54.989568 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:04:54.989577 kernel: Key type asymmetric registered
Dec 16 13:04:54.989586 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:04:54.989595 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:04:54.989603 kernel: io scheduler mq-deadline registered
Dec 16 13:04:54.989612 kernel: io scheduler kyber registered
Dec 16 13:04:54.989620 kernel: io scheduler bfq registered
Dec 16 13:04:54.989629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:04:54.989639 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:04:54.989648 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:54.989657 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:04:54.989667 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:04:54.989676 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 13:04:54.989816 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 13:04:54.989892 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:04:54 UTC (1765890294)
Dec 16 13:04:54.989962 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 13:04:54.989973 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 13:04:54.989982 kernel: efifb: probing for efifb
Dec 16 13:04:54.989991 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 13:04:54.990000 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 13:04:54.992074 kernel: efifb: scrolling: redraw
Dec 16 13:04:54.992087 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:04:54.992097 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:04:54.992106 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:04:54.992115 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:04:54.992128 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:04:54.992138 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:04:54.992147 kernel: Segment Routing with IPv6
Dec 16 13:04:54.992156 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:04:54.992164 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:04:54.992173 kernel: Key type dns_resolver registered
Dec 16 13:04:54.992183 kernel: IPI shorthand broadcast: enabled
Dec 16 13:04:54.992192 kernel: sched_clock: Marking stable (3077004574, 102671779)->(3493813934, -314137581)
Dec 16 13:04:54.992201 kernel: registered taskstats version 1
Dec 16 13:04:54.992213 kernel: Loading compiled-in X.509 certificates
Dec 16 13:04:54.992222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:04:54.992231 kernel: Demotion targets for Node 0: null
Dec 16 13:04:54.992240 kernel: Key type .fscrypt registered
Dec 16 13:04:54.992248 kernel: Key type fscrypt-provisioning registered
Dec 16 13:04:54.992257 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:04:54.992266 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:04:54.992275 kernel: ima: No architecture policies found
Dec 16 13:04:54.992284 kernel: clk: Disabling unused clocks
Dec 16 13:04:54.992294 kernel: Warning: unable to open an initial console.
Dec 16 13:04:54.992303 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:04:54.992313 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:04:54.992322 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:04:54.992332 kernel: Run /init as init process
Dec 16 13:04:54.992341 kernel: with arguments:
Dec 16 13:04:54.992351 kernel: /init
Dec 16 13:04:54.992360 kernel: with environment:
Dec 16 13:04:54.992369 kernel: HOME=/
Dec 16 13:04:54.992380 kernel: TERM=linux
Dec 16 13:04:54.992391 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:04:54.992404 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:04:54.992415 systemd[1]: Detected virtualization microsoft.
Dec 16 13:04:54.992425 systemd[1]: Detected architecture x86-64.
Dec 16 13:04:54.992436 systemd[1]: Running in initrd.
Dec 16 13:04:54.992445 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:04:54.992457 systemd[1]: Hostname set to .
Dec 16 13:04:54.992467 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:04:54.992476 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:04:54.992486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:04:54.992496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:04:54.992506 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:04:54.992516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:04:54.992526 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:04:54.992538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:04:54.992550 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:04:54.992559 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:04:54.992569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:04:54.992578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:04:54.992587 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:04:54.992597 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:04:54.992608 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:04:54.992617 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:04:54.992627 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:04:54.992636 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:04:54.992646 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:04:54.992654 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:04:54.992663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:04:54.992672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:04:54.992682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:04:54.992692 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:04:54.992701 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:04:54.992710 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:04:54.992719 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:04:54.992728 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:04:54.992738 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:04:54.992747 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:04:54.992757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:04:54.992776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:04:54.992786 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:04:54.992796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:04:54.992829 systemd-journald[186]: Collecting audit messages is disabled.
Dec 16 13:04:54.992852 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:04:54.992861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:04:54.992874 systemd-journald[186]: Journal started
Dec 16 13:04:54.992897 systemd-journald[186]: Runtime Journal (/run/log/journal/26ed2e3447f745dda686ca4c33a67887) is 8M, max 158.6M, 150.6M free.
Dec 16 13:04:54.987068 systemd-modules-load[187]: Inserted module 'overlay'
Dec 16 13:04:55.004355 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:04:55.007328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:04:55.015120 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:04:55.017897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:04:55.027796 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:04:55.038136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:04:55.037281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:04:55.042685 kernel: Bridge firewalling registered Dec 16 13:04:55.041562 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 16 13:04:55.046272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:55.050112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:04:55.060673 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:04:55.061827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:55.067208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:55.069528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:55.070959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:04:55.085196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:04:55.089119 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:04:55.107586 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:04:55.123296 systemd-resolved[217]: Positive Trust Anchors: Dec 16 13:04:55.123311 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:04:55.123349 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:04:55.144326 systemd-resolved[217]: Defaulting to hostname 'linux'. Dec 16 13:04:55.147406 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:04:55.152230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:55.191028 kernel: SCSI subsystem initialized Dec 16 13:04:55.199027 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:04:55.210031 kernel: iscsi: registered transport (tcp) Dec 16 13:04:55.228295 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:04:55.228339 kernel: QLogic iSCSI HBA Driver Dec 16 13:04:55.242830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:04:55.260582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:55.261663 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:04:55.294739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:04:55.297531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:04:55.348027 kernel: raid6: avx512x4 gen() 43141 MB/s Dec 16 13:04:55.365022 kernel: raid6: avx512x2 gen() 41934 MB/s Dec 16 13:04:55.383019 kernel: raid6: avx512x1 gen() 25500 MB/s Dec 16 13:04:55.400019 kernel: raid6: avx2x4 gen() 35862 MB/s Dec 16 13:04:55.419019 kernel: raid6: avx2x2 gen() 36937 MB/s Dec 16 13:04:55.436803 kernel: raid6: avx2x1 gen() 29043 MB/s Dec 16 13:04:55.436830 kernel: raid6: using algorithm avx512x4 gen() 43141 MB/s Dec 16 13:04:55.456088 kernel: raid6: .... xor() 7256 MB/s, rmw enabled Dec 16 13:04:55.456104 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:04:55.475032 kernel: xor: automatically using best checksumming function avx Dec 16 13:04:55.600038 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:04:55.604954 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:04:55.609235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:55.630547 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 16 13:04:55.634762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:55.643386 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:04:55.658656 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Dec 16 13:04:55.679072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:04:55.683671 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:04:55.718173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:55.724459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:04:55.764026 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:04:55.775021 kernel: AES CTR mode by8 optimization enabled Dec 16 13:04:55.808053 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:04:55.817804 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:04:55.817848 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:04:55.826379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:55.828861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:55.833684 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:04:55.834180 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:55.840950 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:04:55.840970 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 13:04:55.844880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:55.853039 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 13:04:55.856030 kernel: PTP clock support registered Dec 16 13:04:55.857926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:55.860544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:55.870365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:04:55.884424 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:04:55.884459 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:04:55.884472 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:04:55.884483 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:04:55.897246 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 13:04:55.897287 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:04:55.900075 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:04:55.902041 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:04:55.682185 systemd-resolved[217]: Clock change detected. Flushing caches. Dec 16 13:04:55.691943 systemd-journald[186]: Time jumped backwards, rotating. Dec 16 13:04:55.691992 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 13:04:55.692004 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:55.690087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 13:04:55.700744 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:04:55.700906 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:04:55.704819 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:04:55.705181 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:55.715516 kernel: scsi host0: storvsc_host_t Dec 16 13:04:55.715614 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a4c0e53 (unnamed net_device) (uninitialized): VF slot 1 added Dec 16 13:04:55.715695 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:04:55.721706 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:04:55.726527 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:04:55.740524 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:04:55.740692 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:04:55.745418 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:04:55.751259 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:55.751530 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 16 13:04:55.764413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:55.774413 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:04:55.774614 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:55.790417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:55.938418 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:04:55.943424 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:56.310423 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:04:56.501798 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Dec 16 13:04:56.514801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:04:56.556919 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 16 13:04:56.583356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:56.587251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:56.589201 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:04:56.594088 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:04:56.594172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:56.594202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:04:56.596506 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:04:56.597433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:04:56.622043 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:04:56.631414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:56.642424 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:56.750643 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:56.757010 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 16 13:04:56.757209 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 16 13:04:56.759557 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:56.785417 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 16 13:04:56.785471 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 16 13:04:56.785489 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 16 13:04:56.785505 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 16 13:04:56.811217 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:56.811386 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 16 13:04:56.811551 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 16 13:04:56.818673 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:56.831408 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 16 13:04:56.836143 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a4c0e53 eth0: VF registering: eth1 Dec 16 13:04:56.836298 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 16 13:04:56.841666 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 16 13:04:57.648498 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:57.649096 disk-uuid[661]: The operation has completed successfully. Dec 16 13:04:57.711839 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:04:57.711943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Dec 16 13:04:57.742790 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:04:57.759640 sh[699]: Success Dec 16 13:04:57.790911 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:04:57.790953 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:04:57.791431 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:04:57.801422 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:04:58.190330 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:04:58.198428 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:04:58.212852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:04:58.239411 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (712) Dec 16 13:04:58.244474 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:04:58.244582 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:58.650801 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:04:58.650896 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:04:58.652145 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:04:58.696715 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:04:58.699911 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:04:58.704496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:04:58.707914 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 13:04:58.725516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:04:58.745920 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (735) Dec 16 13:04:58.745958 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:58.747459 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:58.787175 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:04:58.787226 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:04:58.788758 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:04:58.794432 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:58.795198 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:04:58.801554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:04:58.818233 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:04:58.822765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:04:58.858674 systemd-networkd[881]: lo: Link UP Dec 16 13:04:58.858682 systemd-networkd[881]: lo: Gained carrier Dec 16 13:04:58.864581 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:04:58.860887 systemd-networkd[881]: Enumeration completed Dec 16 13:04:58.860968 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 13:04:58.873434 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:58.873642 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a4c0e53 eth0: Data path switched to VF: enP30832s1 Dec 16 13:04:58.861339 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:58.861342 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:58.866313 systemd[1]: Reached target network.target - Network. Dec 16 13:04:58.874554 systemd-networkd[881]: enP30832s1: Link UP Dec 16 13:04:58.874619 systemd-networkd[881]: eth0: Link UP Dec 16 13:04:58.874702 systemd-networkd[881]: eth0: Gained carrier Dec 16 13:04:58.874712 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:58.882801 systemd-networkd[881]: enP30832s1: Gained carrier Dec 16 13:04:58.892999 systemd-networkd[881]: eth0: DHCPv4 address 10.200.0.33/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:05:00.066364 ignition[856]: Ignition 2.22.0 Dec 16 13:05:00.066386 ignition[856]: Stage: fetch-offline Dec 16 13:05:00.066522 ignition[856]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:00.066529 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:00.070942 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:05:00.066620 ignition[856]: parsed url from cmdline: "" Dec 16 13:05:00.066622 ignition[856]: no config URL provided Dec 16 13:05:00.066627 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:05:00.077588 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:05:00.066633 ignition[856]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:05:00.066638 ignition[856]: failed to fetch config: resource requires networking Dec 16 13:05:00.069586 ignition[856]: Ignition finished successfully Dec 16 13:05:00.103555 ignition[891]: Ignition 2.22.0 Dec 16 13:05:00.103566 ignition[891]: Stage: fetch Dec 16 13:05:00.103776 ignition[891]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:00.103784 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:00.103866 ignition[891]: parsed url from cmdline: "" Dec 16 13:05:00.103869 ignition[891]: no config URL provided Dec 16 13:05:00.103873 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:05:00.103879 ignition[891]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:05:00.103899 ignition[891]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 13:05:00.166628 ignition[891]: GET result: OK Dec 16 13:05:00.167228 ignition[891]: config has been read from IMDS userdata Dec 16 13:05:00.167262 ignition[891]: parsing config with SHA512: 42aa59a91a42a1ad983b659a28821d13416cb2aa995e9333b5d30cacc186a02cefa25721bdd9f8afa1b389da2aeec7f18037628de14e7e1cb16f85dcc17eee19 Dec 16 13:05:00.171892 unknown[891]: fetched base config from "system" Dec 16 13:05:00.171909 unknown[891]: fetched base config from "system" Dec 16 13:05:00.171915 unknown[891]: fetched user config from "azure" Dec 16 13:05:00.174688 ignition[891]: fetch: fetch complete Dec 16 13:05:00.174693 ignition[891]: fetch: fetch passed Dec 16 13:05:00.177202 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:05:00.174755 ignition[891]: Ignition finished successfully Dec 16 13:05:00.184527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 13:05:00.216199 ignition[897]: Ignition 2.22.0 Dec 16 13:05:00.216210 ignition[897]: Stage: kargs Dec 16 13:05:00.218939 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:05:00.216479 ignition[897]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:00.224998 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:05:00.216487 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:00.217417 ignition[897]: kargs: kargs passed Dec 16 13:05:00.217451 ignition[897]: Ignition finished successfully Dec 16 13:05:00.252616 ignition[904]: Ignition 2.22.0 Dec 16 13:05:00.252627 ignition[904]: Stage: disks Dec 16 13:05:00.255041 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:05:00.252838 ignition[904]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:00.259378 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:05:00.252846 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:00.264181 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:05:00.253644 ignition[904]: disks: disks passed Dec 16 13:05:00.269022 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:05:00.253677 ignition[904]: Ignition finished successfully Dec 16 13:05:00.276388 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:05:00.276599 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:05:00.283634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:05:00.371550 systemd-fsck[913]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 13:05:00.376080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:05:00.388959 systemd[1]: Mounting sysroot.mount - /sysroot... 
Dec 16 13:05:00.574594 systemd-networkd[881]: eth0: Gained IPv6LL Dec 16 13:05:00.681413 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:05:00.682273 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:05:00.684958 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:05:00.703252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:05:00.708484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:05:00.719789 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:05:00.726497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:05:00.726547 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:05:00.735234 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:05:00.747570 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (922) Dec 16 13:05:00.747597 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:00.747610 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:00.747621 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:05:00.747632 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:05:00.743197 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:05:00.756011 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:05:00.751583 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:05:01.274351 coreos-metadata[924]: Dec 16 13:05:01.274 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:05:01.277326 coreos-metadata[924]: Dec 16 13:05:01.277 INFO Fetch successful Dec 16 13:05:01.277326 coreos-metadata[924]: Dec 16 13:05:01.277 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:05:01.287946 coreos-metadata[924]: Dec 16 13:05:01.287 INFO Fetch successful Dec 16 13:05:01.313036 coreos-metadata[924]: Dec 16 13:05:01.313 INFO wrote hostname ci-4459.2.2-a-22a3eae3ac to /sysroot/etc/hostname Dec 16 13:05:01.315051 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:05:01.514562 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:05:01.553743 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:05:01.573263 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:05:01.577764 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:05:02.607979 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:05:02.614654 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:05:02.624520 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:05:02.630571 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:05:02.633425 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:02.665676 ignition[1042]: INFO : Ignition 2.22.0 Dec 16 13:05:02.665676 ignition[1042]: INFO : Stage: mount Dec 16 13:05:02.665676 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:02.665676 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:02.668518 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 13:05:02.682238 ignition[1042]: INFO : mount: mount passed Dec 16 13:05:02.682238 ignition[1042]: INFO : Ignition finished successfully Dec 16 13:05:02.672194 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:05:02.678034 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:05:02.693254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:05:02.717417 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1054) Dec 16 13:05:02.719406 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:05:02.719428 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:05:02.725883 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:05:02.725915 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:05:02.727220 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:05:02.729240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:05:02.758819 ignition[1071]: INFO : Ignition 2.22.0 Dec 16 13:05:02.758819 ignition[1071]: INFO : Stage: files Dec 16 13:05:02.762114 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:05:02.762114 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:05:02.762114 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:05:02.768754 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:05:02.768754 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:05:02.828994 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:05:02.833472 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:05:02.833472 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:05:02.829348 unknown[1071]: wrote ssh authorized keys file for user: core Dec 16 13:05:02.847854 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:05:02.852474 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:05:20.954886 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:05:21.060283 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:05:21.064063 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:05:21.087845 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 13:05:21.569448 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:05:21.786875 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 13:05:21.786875 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:05:21.859056 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:05:21.873316 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:05:21.873316 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:05:21.873316 ignition[1071]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:05:21.884510 ignition[1071]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:05:21.884510 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:05:21.884510 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:05:21.884510 ignition[1071]: INFO : files: files passed Dec 16 13:05:21.884510 ignition[1071]: INFO : Ignition finished successfully Dec 16 13:05:21.878894 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:05:21.883630 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:05:21.892520 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:05:21.920503 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:21.902659 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:05:21.926527 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:21.926527 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:05:21.902751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:05:21.914325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:05:21.920351 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:05:21.923279 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:05:21.963804 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:05:21.963903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:05:21.968772 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:05:21.973090 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:05:21.976624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:05:21.979013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:05:21.994072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:21.995889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:05:22.012408 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:05:22.012947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:05:22.018590 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:05:22.022564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:05:22.022691 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:05:22.025720 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:05:22.030543 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:05:22.034551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:05:22.037402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:05:22.042548 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:05:22.045556 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:05:22.048533 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:05:22.054321 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:05:22.057856 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:05:22.061994 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:05:22.064210 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:05:22.068442 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:05:22.070584 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:05:22.074388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:05:22.076504 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:05:22.078617 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:05:22.078737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:05:22.085773 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:05:22.085892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:05:22.091498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:05:22.091642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:05:22.094602 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:05:22.094719 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:05:22.098573 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:05:22.098686 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:05:22.104502 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:05:22.104903 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:05:22.105043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:05:22.110381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:05:22.119974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:05:22.121829 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:05:22.129184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:05:22.129306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:05:22.140184 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:05:22.140272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:05:22.148011 ignition[1125]: INFO : Ignition 2.22.0
Dec 16 13:05:22.148011 ignition[1125]: INFO : Stage: umount
Dec 16 13:05:22.148011 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:05:22.148011 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:05:22.167589 ignition[1125]: INFO : umount: umount passed
Dec 16 13:05:22.167589 ignition[1125]: INFO : Ignition finished successfully
Dec 16 13:05:22.150161 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:05:22.150238 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:05:22.151978 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:05:22.152057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:05:22.152897 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:05:22.152933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:05:22.153117 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:05:22.153145 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:05:22.153416 systemd[1]: Stopped target network.target - Network.
Dec 16 13:05:22.153445 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:05:22.153480 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:05:22.153716 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:05:22.153740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:05:22.162530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:05:22.167957 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:05:22.171573 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:05:22.178979 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:05:22.179404 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:05:22.187287 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:05:22.187322 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:05:22.193474 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:05:22.193535 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:05:22.196145 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:05:22.196181 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:05:22.197210 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:05:22.197450 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:05:22.198737 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:05:22.211496 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:05:22.211591 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:05:22.217638 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:05:22.217845 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:05:22.217935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:05:22.222986 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:05:22.223141 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:05:22.223210 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:05:22.225813 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:05:22.228305 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:05:22.228335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:05:22.231463 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:05:22.231510 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:05:22.236196 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:05:22.237921 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:05:22.237978 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:05:22.242668 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:05:22.242719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:05:22.243323 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:05:22.243357 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:05:22.243824 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:05:22.243854 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:05:22.245140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:05:22.246299 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:05:22.246375 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:22.265212 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:05:22.265332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:05:22.292139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:05:22.292191 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:05:22.297515 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:05:22.297546 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:05:22.300904 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:05:22.300954 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:05:22.304092 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:05:22.304136 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:05:22.307028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:05:22.307073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:05:22.314015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:05:22.319470 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:05:22.319527 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:05:22.323229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:05:22.323271 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:05:22.328527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:22.328564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:22.336843 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:05:22.336895 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:05:22.336932 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:05:22.344794 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:05:22.344868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:05:22.364927 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a4c0e53 eth0: Data path switched from VF: enP30832s1
Dec 16 13:05:22.365070 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:05:22.367014 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:05:22.368139 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:05:22.372633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:05:22.375004 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:05:22.402832 systemd[1]: Switching root.
Dec 16 13:05:22.478245 systemd-journald[186]: Journal stopped
Dec 16 13:05:27.073686 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:05:27.073716 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:05:27.073732 kernel: SELinux: policy capability open_perms=1
Dec 16 13:05:27.073742 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:05:27.073750 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:05:27.073759 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:05:27.073769 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:05:27.073779 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:05:27.073790 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:05:27.073799 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:05:27.073808 kernel: audit: type=1403 audit(1765890324.127:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:05:27.073819 systemd[1]: Successfully loaded SELinux policy in 198.291ms.
Dec 16 13:05:27.073830 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.128ms.
Dec 16 13:05:27.073843 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:05:27.073858 systemd[1]: Detected virtualization microsoft.
Dec 16 13:05:27.073868 systemd[1]: Detected architecture x86-64.
Dec 16 13:05:27.073879 systemd[1]: Detected first boot.
Dec 16 13:05:27.073889 systemd[1]: Hostname set to .
Dec 16 13:05:27.073902 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:05:27.073913 zram_generator::config[1168]: No configuration found.
Dec 16 13:05:27.073927 kernel: Guest personality initialized and is inactive
Dec 16 13:05:27.073937 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Dec 16 13:05:27.073947 kernel: Initialized host personality
Dec 16 13:05:27.073957 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:05:27.073967 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:05:27.073979 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:05:27.073989 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:05:27.073999 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:05:27.074012 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:05:27.074021 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:05:27.074030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:05:27.074038 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:05:27.074048 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:05:27.074058 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:05:27.074069 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:05:27.074082 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:05:27.074092 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:05:27.074102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:05:27.074112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:05:27.074123 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:05:27.074137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:05:27.074148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:05:27.074159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:05:27.074172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:05:27.074183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:05:27.074194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:05:27.074204 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:05:27.074213 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:05:27.074223 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:05:27.074233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:05:27.074245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:05:27.074255 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:05:27.074264 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:05:27.074274 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:05:27.074285 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:05:27.074295 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:05:27.074309 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:05:27.074319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:05:27.074330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:05:27.074341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:05:27.074352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:05:27.074362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:05:27.074373 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:05:27.074386 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:05:27.076758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:27.076783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:05:27.076795 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:05:27.076805 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:05:27.076817 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:05:27.076828 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:05:27.076840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:05:27.076852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:27.076868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:05:27.076879 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:05:27.076889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:27.076899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:05:27.076909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:27.076920 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:05:27.076930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:27.076941 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:05:27.076955 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:05:27.076966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:05:27.076977 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:05:27.076988 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:05:27.076999 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:27.077010 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:05:27.077020 kernel: loop: module loaded
Dec 16 13:05:27.077030 kernel: fuse: init (API version 7.41)
Dec 16 13:05:27.077043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:05:27.077082 systemd-journald[1258]: Collecting audit messages is disabled.
Dec 16 13:05:27.077107 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:05:27.077120 systemd-journald[1258]: Journal started
Dec 16 13:05:27.077148 systemd-journald[1258]: Runtime Journal (/run/log/journal/89f4047ba55c4eb39329b2e9dcb970f7) is 8M, max 158.6M, 150.6M free.
Dec 16 13:05:26.686313 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:05:26.694999 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:05:26.695361 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:05:27.085411 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:05:27.090421 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:05:27.107011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:05:27.107057 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:05:27.110272 systemd[1]: Stopped verity-setup.service.
Dec 16 13:05:27.117462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:27.121177 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:05:27.122856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:05:27.125520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:05:27.127161 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:05:27.130582 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:05:27.133543 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:05:27.134916 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:05:27.136232 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:05:27.139676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:05:27.142678 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:05:27.142856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:05:27.145684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:27.145862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:27.147770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:27.147945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:27.150843 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:05:27.150990 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:05:27.154763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:27.155028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:27.156960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:05:27.158813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:05:27.162772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:05:27.173697 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:05:27.181489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:05:27.187525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:05:27.190083 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:05:27.190115 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:05:27.194850 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:05:27.202852 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:05:27.204776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:27.205728 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:05:27.210565 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:05:27.213093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:05:27.216569 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:05:27.218530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:05:27.226778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:05:27.230763 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:05:27.237527 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:05:27.242074 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:05:27.245701 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:05:27.248096 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:05:27.266746 kernel: ACPI: bus type drm_connector registered
Dec 16 13:05:27.269653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:05:27.273559 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:05:27.278664 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:05:27.281034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:05:27.281636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:05:27.282523 systemd-journald[1258]: Time spent on flushing to /var/log/journal/89f4047ba55c4eb39329b2e9dcb970f7 is 48.720ms for 994 entries.
Dec 16 13:05:27.282523 systemd-journald[1258]: System Journal (/var/log/journal/89f4047ba55c4eb39329b2e9dcb970f7) is 11.8M, max 2.6G, 2.6G free.
Dec 16 13:05:27.415945 systemd-journald[1258]: Received client request to flush runtime journal.
Dec 16 13:05:27.415993 systemd-journald[1258]: /var/log/journal/89f4047ba55c4eb39329b2e9dcb970f7/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Dec 16 13:05:27.416012 systemd-journald[1258]: Rotating system journal.
Dec 16 13:05:27.416027 kernel: loop0: detected capacity change from 0 to 27936
Dec 16 13:05:27.287220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:05:27.382557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:05:27.417226 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:05:27.426592 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:05:27.478649 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:05:27.483519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:05:27.552133 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:05:27.552151 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:05:27.554877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:05:27.696735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:05:27.838423 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:05:27.913424 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:05:27.953415 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:05:27.956540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:05:27.986037 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Dec 16 13:05:28.221177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:05:28.226551 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:05:28.300613 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:05:28.326365 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:05:28.376766 kernel: loop2: detected capacity change from 0 to 110984
Dec 16 13:05:28.393419 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:05:28.398942 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:05:28.399002 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:05:28.402076 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:05:28.402126 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:05:28.407800 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:05:28.407785 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:05:28.408638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:05:28.422432 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:05:28.431020 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:05:28.595682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:28.611305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:28.611565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:28.616506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:28.619566 systemd-networkd[1337]: lo: Link UP
Dec 16 13:05:28.619570 systemd-networkd[1337]: lo: Gained carrier
Dec 16 13:05:28.622885 systemd-networkd[1337]: Enumeration completed
Dec 16 13:05:28.622958 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:05:28.626124 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:05:28.626271 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:05:28.630188 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:05:28.636407 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:05:28.633706 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:05:28.642295 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:05:28.645857 systemd-networkd[1337]: enP30832s1: Link UP
Dec 16 13:05:28.645933 systemd-networkd[1337]: eth0: Link UP
Dec 16 13:05:28.645936 systemd-networkd[1337]: eth0: Gained carrier
Dec 16 13:05:28.645952 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:05:28.646406 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a4c0e53 eth0: Data path switched to VF: enP30832s1
Dec 16 13:05:28.649683 systemd-networkd[1337]: enP30832s1: Gained carrier
Dec 16 13:05:28.650210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:05:28.650424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:28.657574 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.0.33/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:05:28.660588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:05:28.703496 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:05:28.739151 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:05:28.744703 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:05:28.749421 kernel: loop3: detected capacity change from 0 to 219144
Dec 16 13:05:28.779141 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:05:28.806415 kernel: loop4: detected capacity change from 0 to 27936
Dec 16 13:05:28.813439 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:05:28.821420 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:05:28.837411 kernel: loop6: detected capacity change from 0 to 110984
Dec 16 13:05:28.888413 kernel: loop7: detected capacity change from 0 to 219144
Dec 16 13:05:28.898091 (sd-merge)[1430]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 13:05:28.898519 (sd-merge)[1430]: Merged extensions into '/usr'.
Dec 16 13:05:28.901594 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:05:28.901608 systemd[1]: Reloading...
Dec 16 13:05:28.957423 zram_generator::config[1461]: No configuration found.
Dec 16 13:05:29.158176 systemd[1]: Reloading finished in 256 ms.
Dec 16 13:05:29.179756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:05:29.182239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:05:29.194473 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:05:29.198581 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:05:29.222005 systemd[1]: Reload requested from client PID 1521 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:05:29.222025 systemd[1]: Reloading...
Dec 16 13:05:29.222826 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:05:29.222851 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:05:29.223081 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:05:29.223304 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:05:29.224350 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:05:29.224671 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Dec 16 13:05:29.224781 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Dec 16 13:05:29.251184 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:05:29.251199 systemd-tmpfiles[1522]: Skipping /boot
Dec 16 13:05:29.264376 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:05:29.264410 systemd-tmpfiles[1522]: Skipping /boot
Dec 16 13:05:29.274412 zram_generator::config[1549]: No configuration found.
Dec 16 13:05:29.468223 systemd[1]: Reloading finished in 245 ms.
Dec 16 13:05:29.496877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:05:29.505112 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:05:29.515173 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:05:29.521142 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:05:29.526090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:05:29.535255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:05:29.542052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.542208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:29.546691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:29.551749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:29.555980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:29.559564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:29.559682 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:29.559773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.564409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:29.565580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:29.573134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:29.573456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:29.577933 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:29.578142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:29.585913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.586675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:29.588060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:29.591613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:05:29.595085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:05:29.597045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:29.597168 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:29.597269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.601255 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:05:29.606266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:29.606937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:29.617938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:05:29.618100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:05:29.622889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:05:29.623057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:05:29.630198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.631080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:05:29.633024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:05:29.639586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:05:29.642060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:05:29.642099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:05:29.642138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:05:29.642184 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:05:29.647531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:05:29.649831 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:05:29.652972 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:05:29.656727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:05:29.656866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:05:29.658579 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:05:29.658703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:05:29.663942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:05:29.683572 systemd-resolved[1616]: Positive Trust Anchors:
Dec 16 13:05:29.683586 systemd-resolved[1616]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:05:29.683619 systemd-resolved[1616]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:05:29.710650 systemd-resolved[1616]: Using system hostname 'ci-4459.2.2-a-22a3eae3ac'.
Dec 16 13:05:29.712108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:05:29.713945 systemd[1]: Reached target network.target - Network.
Dec 16 13:05:29.716473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:05:29.729639 augenrules[1657]: No rules
Dec 16 13:05:29.730620 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:05:29.730896 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:05:29.822546 systemd-networkd[1337]: eth0: Gained IPv6LL
Dec 16 13:05:29.824344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:05:29.826532 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:05:30.439040 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:05:30.441124 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:05:33.495571 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:05:33.506173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:05:33.511629 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:05:33.530332 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:05:33.533660 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:05:33.536579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:05:33.539479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:05:33.542444 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:05:33.544026 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:05:33.545351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:05:33.548445 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:05:33.549949 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:05:33.549980 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:05:33.552438 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:05:33.556444 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:05:33.560385 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:05:33.564027 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:05:33.567587 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:05:33.570533 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:05:33.578898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:05:33.580895 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:05:33.583031 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:05:33.586161 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:05:33.596952 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:05:33.598142 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:05:33.598163 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:05:33.600060 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 13:05:33.602895 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:05:33.616549 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:05:33.619525 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:05:33.625527 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:05:33.631545 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:05:33.635501 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:05:33.637829 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:05:33.639066 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:05:33.641100 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 16 13:05:33.648512 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 13:05:33.650369 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 13:05:33.652152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:05:33.657576 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:05:33.664507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:05:33.667990 jq[1675]: false
Dec 16 13:05:33.670549 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:05:33.676140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:05:33.680062 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:05:33.684978 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:05:33.688811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:05:33.690566 extend-filesystems[1679]: Found /dev/nvme0n1p6
Dec 16 13:05:33.695924 KVP[1681]: KVP starting; pid is:1681
Dec 16 13:05:33.693620 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:05:33.697627 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:05:33.706389 kernel: hv_utils: KVP IC version 4.0
Dec 16 13:05:33.703086 KVP[1681]: KVP LIC Version: 3.1
Dec 16 13:05:33.703287 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:05:33.711005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:05:33.713603 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:05:33.714529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:05:33.716549 extend-filesystems[1679]: Found /dev/nvme0n1p9
Dec 16 13:05:33.719500 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Refreshing passwd entry cache
Dec 16 13:05:33.723144 oslogin_cache_refresh[1680]: Refreshing passwd entry cache
Dec 16 13:05:33.725010 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:05:33.725588 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:05:33.737317 extend-filesystems[1679]: Checking size of /dev/nvme0n1p9
Dec 16 13:05:33.745701 jq[1694]: true
Dec 16 13:05:33.751269 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Failure getting users, quitting
Dec 16 13:05:33.751269 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:05:33.751269 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Refreshing group entry cache
Dec 16 13:05:33.750861 oslogin_cache_refresh[1680]: Failure getting users, quitting
Dec 16 13:05:33.750878 oslogin_cache_refresh[1680]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:05:33.750919 oslogin_cache_refresh[1680]: Refreshing group entry cache
Dec 16 13:05:33.757146 (ntainerd)[1711]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:05:33.766866 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Failure getting groups, quitting
Dec 16 13:05:33.766866 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:05:33.765407 oslogin_cache_refresh[1680]: Failure getting groups, quitting
Dec 16 13:05:33.765424 oslogin_cache_refresh[1680]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:05:33.768077 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:05:33.768282 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:05:33.778777 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:05:33.778354 chronyd[1670]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 13:05:33.779007 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:05:33.797572 extend-filesystems[1679]: Old size kept for /dev/nvme0n1p9
Dec 16 13:05:33.801939 jq[1715]: true
Dec 16 13:05:33.803750 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:05:33.803982 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:05:33.830095 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:05:33.835338 update_engine[1693]: I20251216 13:05:33.835255 1693 main.cc:92] Flatcar Update Engine starting
Dec 16 13:05:33.835546 chronyd[1670]: Timezone right/UTC failed leap second check, ignoring
Dec 16 13:05:33.835692 chronyd[1670]: Loaded seccomp filter (level 2)
Dec 16 13:05:33.835823 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 13:05:33.844277 tar[1701]: linux-amd64/LICENSE
Dec 16 13:05:33.844556 tar[1701]: linux-amd64/helm
Dec 16 13:05:33.863418 systemd-logind[1691]: New seat seat0.
Dec 16 13:05:33.864795 systemd-logind[1691]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 13:05:33.864936 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:05:33.926815 dbus-daemon[1673]: [system] SELinux support is enabled
Dec 16 13:05:33.926944 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:05:33.931815 bash[1751]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:05:33.932295 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:05:33.936119 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:05:33.936209 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:05:33.936244 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:05:33.941556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:05:33.941579 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:05:33.947659 dbus-daemon[1673]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:05:33.948603 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:05:33.949436 update_engine[1693]: I20251216 13:05:33.948407 1693 update_check_scheduler.cc:74] Next update check in 8m14s
Dec 16 13:05:33.952429 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:05:34.043782 sshd_keygen[1721]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:05:34.084766 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:05:34.094627 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:05:34.100694 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 16 13:05:34.126697 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:05:34.126898 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:05:34.133007 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:05:34.152739 coreos-metadata[1672]: Dec 16 13:05:34.152 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:05:34.160251 coreos-metadata[1672]: Dec 16 13:05:34.160 INFO Fetch successful
Dec 16 13:05:34.160251 coreos-metadata[1672]: Dec 16 13:05:34.160 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 13:05:34.164095 coreos-metadata[1672]: Dec 16 13:05:34.163 INFO Fetch successful
Dec 16 13:05:34.164095 coreos-metadata[1672]: Dec 16 13:05:34.164 INFO Fetching http://168.63.129.16/machine/0278dcb9-5d5c-449f-8f1d-f44016b6d7bd/d6cf19a4%2D475f%2D4801%2D9c86%2Dadc9c5db46fd.%5Fci%2D4459.2.2%2Da%2D22a3eae3ac?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 13:05:34.168296 coreos-metadata[1672]: Dec 16 13:05:34.167 INFO Fetch successful
Dec 16 13:05:34.168296 coreos-metadata[1672]: Dec 16 13:05:34.168 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:05:34.173483 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:05:34.179289 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 16 13:05:34.183491 coreos-metadata[1672]: Dec 16 13:05:34.183 INFO Fetch successful
Dec 16 13:05:34.186594 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:05:34.192535 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:05:34.194802 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:05:34.228831 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:05:34.230720 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:05:34.264524 locksmithd[1768]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:05:34.432118 tar[1701]: linux-amd64/README.md
Dec 16 13:05:34.447938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:05:34.969572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:05:34.973470 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:05:35.095429 containerd[1711]: time="2025-12-16T13:05:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:05:35.096222 containerd[1711]: time="2025-12-16T13:05:35.096193771Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:05:35.107892 containerd[1711]: time="2025-12-16T13:05:35.107860095Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.582µs"
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108430235Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108460710Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108593146Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108606159Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108630622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108681835Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108692596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108910311Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108921603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108932031Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:05:35.108943 containerd[1711]: time="2025-12-16T13:05:35.108940285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:05:35.109181 containerd[1711]: time="2025-12-16T13:05:35.108991998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:05:35.109181 containerd[1711]: time="2025-12-16T13:05:35.109158507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:05:35.109218 containerd[1711]: time="2025-12-16T13:05:35.109179329Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:05:35.109218 containerd[1711]: time="2025-12-16T13:05:35.109190031Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:05:35.109262 containerd[1711]: time="2025-12-16T13:05:35.109235054Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:05:35.110420 containerd[1711]: time="2025-12-16T13:05:35.109518847Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:05:35.110420 containerd[1711]: time="2025-12-16T13:05:35.109595151Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:05:35.126144 containerd[1711]: time="2025-12-16T13:05:35.126110563Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:05:35.126413 containerd[1711]: time="2025-12-16T13:05:35.126284848Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:05:35.126413 containerd[1711]: time="2025-12-16T13:05:35.126378312Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:05:35.126472 containerd[1711]: time="2025-12-16T13:05:35.126464750Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:05:35.126501 containerd[1711]: time="2025-12-16T13:05:35.126495388Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:05:35.126676 containerd[1711]: time="2025-12-16T13:05:35.126524348Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:05:35.126676 containerd[1711]: time="2025-12-16T13:05:35.126612210Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:05:35.126676 containerd[1711]: time="2025-12-16T13:05:35.126624490Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:05:35.126676 containerd[1711]: time="2025-12-16T13:05:35.126637108Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:05:35.126676 containerd[1711]: time="2025-12-16T13:05:35.126653499Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:05:35.126812 containerd[1711]: time="2025-12-16T13:05:35.126802898Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:05:35.126858 containerd[1711]: time="2025-12-16T13:05:35.126849498Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:05:35.127031 containerd[1711]: time="2025-12-16T13:05:35.127010425Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:05:35.127079 containerd[1711]: time="2025-12-16T13:05:35.127070131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:05:35.127125 containerd[1711]: time="2025-12-16T13:05:35.127117627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:05:35.127236 containerd[1711]: time="2025-12-16T13:05:35.127171582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:05:35.127236 containerd[1711]: time="2025-12-16T13:05:35.127184174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:05:35.127236 containerd[1711]: time="2025-12-16T13:05:35.127194665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:05:35.127236 containerd[1711]: time="2025-12-16T13:05:35.127205402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:05:35.127236 containerd[1711]: time="2025-12-16T13:05:35.127216636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:05:35.127405 containerd[1711]: time="2025-12-16T13:05:35.127227871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:05:35.127405 containerd[1711]: time="2025-12-16T13:05:35.127362903Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:05:35.127405 containerd[1711]: time="2025-12-16T13:05:35.127374487Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:05:35.128427 containerd[1711]: time="2025-12-16T13:05:35.127672654Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:05:35.128427 containerd[1711]: time="2025-12-16T13:05:35.127693169Z" level=info msg="Start snapshots syncer"
Dec 16 13:05:35.128427 containerd[1711]: time="2025-12-16T13:05:35.127721277Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128004450Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128059769Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128112622Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128202888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128221728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128232318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128242892Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128256494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128267146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128278999Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128310969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128326180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128337334Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128358452Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128373402Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:05:35.128519 containerd[1711]: time="2025-12-16T13:05:35.128382726Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129435100Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129463770Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129480609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129513044Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129531316Z" level=info msg="runtime interface created" Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129537741Z" level=info msg="created NRI interface" Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129547022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129559321Z" level=info msg="Connect containerd service" Dec 16 13:05:35.130322 containerd[1711]: time="2025-12-16T13:05:35.129609961Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:05:35.130567 
containerd[1711]: time="2025-12-16T13:05:35.130533148Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:05:35.515443 kubelet[1820]: E1216 13:05:35.514731 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:35.517541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:35.517677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:35.518013 systemd[1]: kubelet.service: Consumed 877ms CPU time, 255.2M memory peak. Dec 16 13:05:35.703327 containerd[1711]: time="2025-12-16T13:05:35.703006766Z" level=info msg="Start subscribing containerd event" Dec 16 13:05:35.703327 containerd[1711]: time="2025-12-16T13:05:35.703222599Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 16 13:05:35.703327 containerd[1711]: time="2025-12-16T13:05:35.703237395Z" level=info msg="Start recovering state" Dec 16 13:05:35.703634 containerd[1711]: time="2025-12-16T13:05:35.703622690Z" level=info msg="Start event monitor" Dec 16 13:05:35.703685 containerd[1711]: time="2025-12-16T13:05:35.703677330Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:05:35.703720 containerd[1711]: time="2025-12-16T13:05:35.703713504Z" level=info msg="Start streaming server" Dec 16 13:05:35.703757 containerd[1711]: time="2025-12-16T13:05:35.703750444Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:05:35.705282 containerd[1711]: time="2025-12-16T13:05:35.703784934Z" level=info msg="runtime interface starting up..." Dec 16 13:05:35.705282 containerd[1711]: time="2025-12-16T13:05:35.703792279Z" level=info msg="starting plugins..." Dec 16 13:05:35.705282 containerd[1711]: time="2025-12-16T13:05:35.703805383Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:05:35.705282 containerd[1711]: time="2025-12-16T13:05:35.703271008Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:05:35.705282 containerd[1711]: time="2025-12-16T13:05:35.703939411Z" level=info msg="containerd successfully booted in 0.609060s" Dec 16 13:05:35.704213 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:05:35.707476 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:05:35.711530 systemd[1]: Startup finished in 3.192s (kernel) + 29.451s (initrd) + 11.780s (userspace) = 44.425s. Dec 16 13:05:35.986849 login[1796]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:05:35.990208 login[1797]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:05:35.997264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 13:05:35.998632 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:05:36.008560 systemd-logind[1691]: New session 2 of user core. Dec 16 13:05:36.011928 systemd-logind[1691]: New session 1 of user core. Dec 16 13:05:36.037826 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:05:36.041317 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:05:36.054072 (systemd)[1849]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:05:36.056307 systemd-logind[1691]: New session c1 of user core. Dec 16 13:05:36.062856 waagent[1794]: 2025-12-16T13:05:36.062779Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:05:36.065217 waagent[1794]: 2025-12-16T13:05:36.065157Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 13:05:36.066950 waagent[1794]: 2025-12-16T13:05:36.066895Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:05:36.068388 waagent[1794]: 2025-12-16T13:05:36.068336Z INFO Daemon Daemon Run daemon Dec 16 13:05:36.068821 waagent[1794]: 2025-12-16T13:05:36.068789Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 13:05:36.068897 waagent[1794]: 2025-12-16T13:05:36.068876Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:05:36.069063 waagent[1794]: 2025-12-16T13:05:36.069041Z INFO Daemon Daemon Activate resource disk Dec 16 13:05:36.069132 waagent[1794]: 2025-12-16T13:05:36.069113Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:05:36.076943 waagent[1794]: 2025-12-16T13:05:36.076896Z INFO Daemon Daemon Found device: None Dec 16 13:05:36.078120 waagent[1794]: 2025-12-16T13:05:36.077922Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:05:36.079521 waagent[1794]: 2025-12-16T13:05:36.079488Z ERROR Daemon 
Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:05:36.083253 waagent[1794]: 2025-12-16T13:05:36.083211Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:05:36.084803 waagent[1794]: 2025-12-16T13:05:36.084774Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:05:36.093413 waagent[1794]: 2025-12-16T13:05:36.091607Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:05:36.096741 waagent[1794]: 2025-12-16T13:05:36.096701Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:05:36.097422 waagent[1794]: 2025-12-16T13:05:36.097368Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:05:36.097640 waagent[1794]: 2025-12-16T13:05:36.097618Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:05:36.153225 waagent[1794]: 2025-12-16T13:05:36.150487Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:05:36.183713 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:05:36.186270 waagent[1794]: 2025-12-16T13:05:36.186214Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:05:36.188350 waagent[1794]: 2025-12-16T13:05:36.188312Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:05:36.190717 waagent[1794]: 2025-12-16T13:05:36.190684Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 16 13:05:36.193429 waagent[1794]: 2025-12-16T13:05:36.193336Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:05:36.195661 waagent[1794]: 2025-12-16T13:05:36.195631Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:05:36.197835 waagent[1794]: 2025-12-16T13:05:36.197804Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:05:36.213414 waagent[1794]: 2025-12-16T13:05:36.212800Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:05:36.215503 waagent[1794]: 2025-12-16T13:05:36.215481Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:05:36.217634 waagent[1794]: 2025-12-16T13:05:36.217604Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:05:36.258187 systemd[1849]: Queued start job for default target default.target. Dec 16 13:05:36.267146 systemd[1849]: Created slice app.slice - User Application Slice. Dec 16 13:05:36.267180 systemd[1849]: Reached target paths.target - Paths. Dec 16 13:05:36.267214 systemd[1849]: Reached target timers.target - Timers. Dec 16 13:05:36.268494 systemd[1849]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:05:36.280763 systemd[1849]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:05:36.280860 systemd[1849]: Reached target sockets.target - Sockets. Dec 16 13:05:36.280951 systemd[1849]: Reached target basic.target - Basic System. Dec 16 13:05:36.281005 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:05:36.281497 systemd[1849]: Reached target default.target - Main User Target. Dec 16 13:05:36.281527 systemd[1849]: Startup finished in 218ms. Dec 16 13:05:36.286525 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:05:36.287139 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 13:05:36.310040 waagent[1794]: 2025-12-16T13:05:36.309961Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:05:36.311485 waagent[1794]: 2025-12-16T13:05:36.311264Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:05:36.319319 waagent[1794]: 2025-12-16T13:05:36.319273Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:05:36.338147 waagent[1794]: 2025-12-16T13:05:36.337063Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 13:05:36.338147 waagent[1794]: 2025-12-16T13:05:36.337962Z INFO Daemon Dec 16 13:05:36.338423 waagent[1794]: 2025-12-16T13:05:36.338180Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f89ef65a-1fa5-4a84-8d59-7ebcb7d1079c eTag: 11261418471652412212 source: Fabric] Dec 16 13:05:36.344450 waagent[1794]: 2025-12-16T13:05:36.342202Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 13:05:36.345462 waagent[1794]: 2025-12-16T13:05:36.345052Z INFO Daemon Dec 16 13:05:36.347938 waagent[1794]: 2025-12-16T13:05:36.346708Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:05:36.352197 waagent[1794]: 2025-12-16T13:05:36.352165Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:05:36.433358 waagent[1794]: 2025-12-16T13:05:36.433302Z INFO Daemon Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True} Dec 16 13:05:36.435862 waagent[1794]: 2025-12-16T13:05:36.434031Z INFO Daemon Fetch goal state completed Dec 16 13:05:36.447832 waagent[1794]: 2025-12-16T13:05:36.447793Z INFO Daemon Daemon Starting provisioning Dec 16 13:05:36.448236 waagent[1794]: 2025-12-16T13:05:36.448203Z INFO Daemon Daemon Handle ovf-env.xml. 
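The certificate "thumbprint" in the goal-state fetch above (342A9152...) follows the usual convention of an uppercase hex SHA-1 digest over the certificate's DER bytes. A sketch of deriving such a value (the input bytes here are a stand-in, not the actual certificate):

```python
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """Uppercase SHA-1 hex digest, the format waagent logs for certificates."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Placeholder bytes; a real call would pass the DER-encoded certificate.
print(thumbprint(b"example"))
```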
Dec 16 13:05:36.449095 waagent[1794]: 2025-12-16T13:05:36.448283Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-22a3eae3ac] Dec 16 13:05:36.451846 waagent[1794]: 2025-12-16T13:05:36.451805Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-22a3eae3ac] Dec 16 13:05:36.452593 waagent[1794]: 2025-12-16T13:05:36.452162Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:05:36.452593 waagent[1794]: 2025-12-16T13:05:36.452513Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:05:36.461845 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:05:36.461852 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:05:36.461875 systemd-networkd[1337]: eth0: DHCP lease lost Dec 16 13:05:36.462767 waagent[1794]: 2025-12-16T13:05:36.462721Z INFO Daemon Daemon Create user account if not exists Dec 16 13:05:36.463786 waagent[1794]: 2025-12-16T13:05:36.463049Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:05:36.463786 waagent[1794]: 2025-12-16T13:05:36.463260Z INFO Daemon Daemon Configure sudoer Dec 16 13:05:36.469045 waagent[1794]: 2025-12-16T13:05:36.468993Z INFO Daemon Daemon Configure sshd Dec 16 13:05:36.473609 waagent[1794]: 2025-12-16T13:05:36.473568Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:05:36.476509 waagent[1794]: 2025-12-16T13:05:36.476270Z INFO Daemon Daemon Deploy ssh public key. 
Dec 16 13:05:36.477448 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.0.33/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:05:37.568345 waagent[1794]: 2025-12-16T13:05:37.568275Z INFO Daemon Daemon Provisioning complete Dec 16 13:05:37.586249 waagent[1794]: 2025-12-16T13:05:37.586213Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:05:37.586912 waagent[1794]: 2025-12-16T13:05:37.586646Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 13:05:37.586975 waagent[1794]: 2025-12-16T13:05:37.586951Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:05:37.693412 waagent[1897]: 2025-12-16T13:05:37.693338Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:05:37.693711 waagent[1897]: 2025-12-16T13:05:37.693463Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 13:05:37.693711 waagent[1897]: 2025-12-16T13:05:37.693507Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:05:37.693711 waagent[1897]: 2025-12-16T13:05:37.693552Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:05:37.738075 waagent[1897]: 2025-12-16T13:05:37.738016Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:05:37.738218 waagent[1897]: 2025-12-16T13:05:37.738192Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:37.738280 waagent[1897]: 2025-12-16T13:05:37.738250Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:37.744554 waagent[1897]: 2025-12-16T13:05:37.744501Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:05:37.757032 waagent[1897]: 2025-12-16T13:05:37.757001Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 
13:05:37.757361 waagent[1897]: 2025-12-16T13:05:37.757329Z INFO ExtHandler Dec 16 13:05:37.757433 waagent[1897]: 2025-12-16T13:05:37.757383Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0db4925f-a577-4070-b3f5-d3a8abecf519 eTag: 11261418471652412212 source: Fabric] Dec 16 13:05:37.757633 waagent[1897]: 2025-12-16T13:05:37.757610Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 13:05:37.757956 waagent[1897]: 2025-12-16T13:05:37.757932Z INFO ExtHandler Dec 16 13:05:37.757989 waagent[1897]: 2025-12-16T13:05:37.757972Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:05:37.764036 waagent[1897]: 2025-12-16T13:05:37.764008Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:05:37.822604 waagent[1897]: 2025-12-16T13:05:37.822519Z INFO ExtHandler Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True} Dec 16 13:05:37.822933 waagent[1897]: 2025-12-16T13:05:37.822902Z INFO ExtHandler Fetch goal state completed Dec 16 13:05:37.836510 waagent[1897]: 2025-12-16T13:05:37.836463Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 13:05:37.840609 waagent[1897]: 2025-12-16T13:05:37.840563Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1897 Dec 16 13:05:37.840728 waagent[1897]: 2025-12-16T13:05:37.840704Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:05:37.840982 waagent[1897]: 2025-12-16T13:05:37.840958Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:05:37.842035 waagent[1897]: 2025-12-16T13:05:37.842003Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:05:37.842316 waagent[1897]: 
2025-12-16T13:05:37.842290Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:05:37.842450 waagent[1897]: 2025-12-16T13:05:37.842420Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:05:37.842855 waagent[1897]: 2025-12-16T13:05:37.842826Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:05:37.880364 waagent[1897]: 2025-12-16T13:05:37.880337Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:05:37.880509 waagent[1897]: 2025-12-16T13:05:37.880487Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:05:37.886136 waagent[1897]: 2025-12-16T13:05:37.885775Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:05:37.890964 systemd[1]: Reload requested from client PID 1912 ('systemctl') (unit waagent.service)... Dec 16 13:05:37.890978 systemd[1]: Reloading... Dec 16 13:05:37.963419 zram_generator::config[1951]: No configuration found. Dec 16 13:05:38.138116 systemd[1]: Reloading finished in 246 ms. Dec 16 13:05:38.156455 waagent[1897]: 2025-12-16T13:05:38.155606Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:05:38.156455 waagent[1897]: 2025-12-16T13:05:38.155752Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:05:38.216917 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:05:38.672493 waagent[1897]: 2025-12-16T13:05:38.672382Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Dec 16 13:05:38.672770 waagent[1897]: 2025-12-16T13:05:38.672743Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:05:38.673382 waagent[1897]: 2025-12-16T13:05:38.673349Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:05:38.673703 waagent[1897]: 2025-12-16T13:05:38.673676Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:05:38.673912 waagent[1897]: 2025-12-16T13:05:38.673854Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:05:38.673961 waagent[1897]: 2025-12-16T13:05:38.673917Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:05:38.674225 waagent[1897]: 2025-12-16T13:05:38.674173Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:05:38.674431 waagent[1897]: 2025-12-16T13:05:38.674373Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:38.674471 waagent[1897]: 2025-12-16T13:05:38.674446Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:38.674528 waagent[1897]: 2025-12-16T13:05:38.674500Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:38.674669 waagent[1897]: 2025-12-16T13:05:38.674650Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:05:38.674713 waagent[1897]: 2025-12-16T13:05:38.674693Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:05:38.674743 waagent[1897]: 2025-12-16T13:05:38.674719Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
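The log-collection check above is a plain conjunction of the three bracketed values: configuration enabled AND (cgroups v1 OR cgroups v2 with resource limiting) AND a supported Python. Restated with the values from this log, it comes out False because neither cgroup condition holds:

```python
def log_collection_allowed(config_enabled: bool,
                           cgroups_v1: bool,
                           cgroups_v2_limits: bool,
                           python_supported: bool) -> bool:
    """Mirror of the three-condition check waagent logs above."""
    return config_enabled and (cgroups_v1 or cgroups_v2_limits) and python_supported

# Values bracketed in the log line: [True], [False] OR [False], [True]
print(log_collection_allowed(True, False, False, True))  # → False
```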
Dec 16 13:05:38.674876 waagent[1897]: 2025-12-16T13:05:38.674858Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:05:38.675085 waagent[1897]: 2025-12-16T13:05:38.675065Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:05:38.675655 waagent[1897]: 2025-12-16T13:05:38.675568Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:38.675860 waagent[1897]: 2025-12-16T13:05:38.675840Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:05:38.677518 waagent[1897]: 2025-12-16T13:05:38.677479Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:05:38.677518 waagent[1897]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:05:38.677518 waagent[1897]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:05:38.677518 waagent[1897]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:05:38.677518 waagent[1897]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:38.677518 waagent[1897]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:38.677518 waagent[1897]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:38.685421 waagent[1897]: 2025-12-16T13:05:38.684299Z INFO ExtHandler ExtHandler Dec 16 13:05:38.685421 waagent[1897]: 2025-12-16T13:05:38.684365Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a6a74020-6898-484b-9ad7-f1c4f8cc7eae correlation a9ff4e37-bd99-4d77-9280-4193e5c626f4 created: 2025-12-16T13:04:20.555305Z] Dec 16 13:05:38.685421 waagent[1897]: 2025-12-16T13:05:38.684705Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
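The /proc/net/route dump above prints addresses as little-endian hex fields, so 0100C80A is the gateway 10.200.0.1, 10813FA8 is the Azure wireserver 168.63.129.16, and FEA9FEA9 is 169.254.169.254 (the instance metadata endpoint). A small decoder, assuming that byte order:

```python
import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Convert a little-endian /proc/net/route hex field to a dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

print(decode_route_addr("0100C80A"))  # → 10.200.0.1
print(decode_route_addr("10813FA8"))  # → 168.63.129.16
```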
Dec 16 13:05:38.685421 waagent[1897]: 2025-12-16T13:05:38.685225Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:05:38.731018 waagent[1897]: 2025-12-16T13:05:38.730974Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:05:38.731018 waagent[1897]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 13:05:38.731369 waagent[1897]: 2025-12-16T13:05:38.731339Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8C97FE8C-89F1-4DB3-B0FF-00A4183BEA35;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:05:38.752578 waagent[1897]: 2025-12-16T13:05:38.752531Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:05:38.752578 waagent[1897]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:05:38.752578 waagent[1897]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:05:38.752578 waagent[1897]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:4c:0e:53 brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 16 13:05:38.752578 waagent[1897]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:4c:0e:53 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:05:38.752578 waagent[1897]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:05:38.752578 waagent[1897]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:05:38.752578 waagent[1897]: 2: eth0 inet 10.200.0.33/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:05:38.752578 waagent[1897]: Executing 
['ip', '-6', '-a', '-o', 'address']: Dec 16 13:05:38.752578 waagent[1897]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:05:38.752578 waagent[1897]: 2: eth0 inet6 fe80::20d:3aff:fe4c:e53/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:05:38.782920 waagent[1897]: 2025-12-16T13:05:38.782873Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:05:38.782920 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.782920 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.782920 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.782920 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.782920 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.782920 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.782920 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:05:38.782920 waagent[1897]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:38.782920 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:38.785899 waagent[1897]: 2025-12-16T13:05:38.785854Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:05:38.785899 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.785899 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.785899 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.785899 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.785899 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:38.785899 waagent[1897]: pkts bytes target prot opt in out source destination Dec 16 13:05:38.785899 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 tcp dpt:53 Dec 16 13:05:38.785899 waagent[1897]: 9 1052 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:38.785899 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:45.684520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:05:45.685966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:46.102657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:46.108597 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:46.150696 kubelet[2049]: E1216 13:05:46.150657 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:46.153605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:46.153732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:46.154014 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.7M memory peak. Dec 16 13:05:56.184538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:05:56.186011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:56.644497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:05:56.653615 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:56.686277 kubelet[2064]: E1216 13:05:56.686205 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:56.687928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:56.688073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:56.688412 systemd[1]: kubelet.service: Consumed 128ms CPU time, 109.9M memory peak. Dec 16 13:05:57.628217 chronyd[1670]: Selected source PHC0 Dec 16 13:06:06.934535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:06:06.935989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:07.378409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:07.381451 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:07.417300 kubelet[2080]: E1216 13:06:07.417256 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:07.418962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:07.419097 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
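The kubelet restarts above arrive on a steady cadence, which is systemd's scheduled-restart behavior rather than anything the kubelet does itself. As a rough sketch (timestamps copied from the "Scheduled restart job" entries above), the spacing between attempts can be checked like this:

```python
from datetime import datetime

# "Scheduled restart job" timestamps copied from the kubelet entries above.
restarts = ["13:05:45.684520", "13:05:56.184538", "13:06:06.934535"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
print(gaps)  # [10.5, 10.7]
```

The roughly 10.5-second spacing is consistent with a unit configured with something like `RestartSec=10` plus a little job-queue and startup latency; the exact unit settings are not shown in this log, so that reading is an assumption.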
Dec 16 13:06:07.419447 systemd[1]: kubelet.service: Consumed 130ms CPU time, 110.3M memory peak. Dec 16 13:06:07.434653 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:06:07.435753 systemd[1]: Started sshd@0-10.200.0.33:22-10.200.16.10:55350.service - OpenSSH per-connection server daemon (10.200.16.10:55350). Dec 16 13:06:08.106360 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 55350 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:08.107490 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:08.111862 systemd-logind[1691]: New session 3 of user core. Dec 16 13:06:08.117559 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:06:08.599197 systemd[1]: Started sshd@1-10.200.0.33:22-10.200.16.10:55352.service - OpenSSH per-connection server daemon (10.200.16.10:55352). Dec 16 13:06:09.152160 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 55352 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:09.153270 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:09.157517 systemd-logind[1691]: New session 4 of user core. Dec 16 13:06:09.163532 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:06:09.543883 sshd[2097]: Connection closed by 10.200.16.10 port 55352 Dec 16 13:06:09.544538 sshd-session[2094]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:09.547390 systemd[1]: sshd@1-10.200.0.33:22-10.200.16.10:55352.service: Deactivated successfully. Dec 16 13:06:09.549671 systemd-logind[1691]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:06:09.549770 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:06:09.551518 systemd-logind[1691]: Removed session 4. 
Dec 16 13:06:09.640937 systemd[1]: Started sshd@2-10.200.0.33:22-10.200.16.10:55364.service - OpenSSH per-connection server daemon (10.200.16.10:55364). Dec 16 13:06:10.198764 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 55364 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:10.199955 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:10.204143 systemd-logind[1691]: New session 5 of user core. Dec 16 13:06:10.209542 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:06:10.586068 sshd[2106]: Connection closed by 10.200.16.10 port 55364 Dec 16 13:06:10.586645 sshd-session[2103]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:10.590157 systemd[1]: sshd@2-10.200.0.33:22-10.200.16.10:55364.service: Deactivated successfully. Dec 16 13:06:10.591713 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:06:10.592448 systemd-logind[1691]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:06:10.593698 systemd-logind[1691]: Removed session 5. Dec 16 13:06:10.684089 systemd[1]: Started sshd@3-10.200.0.33:22-10.200.16.10:37396.service - OpenSSH per-connection server daemon (10.200.16.10:37396). Dec 16 13:06:11.239199 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 37396 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:11.240326 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:11.244556 systemd-logind[1691]: New session 6 of user core. Dec 16 13:06:11.249535 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:06:11.628622 sshd[2115]: Connection closed by 10.200.16.10 port 37396 Dec 16 13:06:11.629277 sshd-session[2112]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:11.632591 systemd[1]: sshd@3-10.200.0.33:22-10.200.16.10:37396.service: Deactivated successfully. 
Dec 16 13:06:11.634067 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:06:11.634810 systemd-logind[1691]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:06:11.635872 systemd-logind[1691]: Removed session 6. Dec 16 13:06:11.728158 systemd[1]: Started sshd@4-10.200.0.33:22-10.200.16.10:37404.service - OpenSSH per-connection server daemon (10.200.16.10:37404). Dec 16 13:06:12.284082 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 37404 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:12.285207 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:12.289492 systemd-logind[1691]: New session 7 of user core. Dec 16 13:06:12.296532 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:06:12.718906 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:06:12.719144 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:12.749213 sudo[2125]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:12.836559 sshd[2124]: Connection closed by 10.200.16.10 port 37404 Dec 16 13:06:12.837231 sshd-session[2121]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:12.840363 systemd[1]: sshd@4-10.200.0.33:22-10.200.16.10:37404.service: Deactivated successfully. Dec 16 13:06:12.841895 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:06:12.843285 systemd-logind[1691]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:06:12.844555 systemd-logind[1691]: Removed session 7. Dec 16 13:06:12.949121 systemd[1]: Started sshd@5-10.200.0.33:22-10.200.16.10:37420.service - OpenSSH per-connection server daemon (10.200.16.10:37420). 
Dec 16 13:06:13.501862 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 37420 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:13.503069 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:13.507349 systemd-logind[1691]: New session 8 of user core. Dec 16 13:06:13.514532 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:06:13.805627 sudo[2136]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:06:13.805850 sudo[2136]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:13.812498 sudo[2136]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:13.816366 sudo[2135]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:06:13.816619 sudo[2135]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:13.824509 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:06:13.856026 augenrules[2158]: No rules Dec 16 13:06:13.857070 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:06:13.857261 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:06:13.857983 sudo[2135]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:13.944771 sshd[2134]: Connection closed by 10.200.16.10 port 37420 Dec 16 13:06:13.945231 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:13.948572 systemd[1]: sshd@5-10.200.0.33:22-10.200.16.10:37420.service: Deactivated successfully. Dec 16 13:06:13.950048 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:06:13.950700 systemd-logind[1691]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:06:13.951755 systemd-logind[1691]: Removed session 8. 
Dec 16 13:06:14.043014 systemd[1]: Started sshd@6-10.200.0.33:22-10.200.16.10:37436.service - OpenSSH per-connection server daemon (10.200.16.10:37436). Dec 16 13:06:14.603088 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 37436 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:14.604155 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:14.608342 systemd-logind[1691]: New session 9 of user core. Dec 16 13:06:14.614526 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:06:14.908627 sudo[2171]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:06:14.908851 sudo[2171]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:06:16.530223 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 16 13:06:16.676631 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:06:16.693683 (dockerd)[2190]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:06:17.434332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:06:17.435853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:17.993549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:06:18.003627 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:18.034956 kubelet[2203]: E1216 13:06:18.034901 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:18.036485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:18.036620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:18.036924 systemd[1]: kubelet.service: Consumed 129ms CPU time, 110.1M memory peak. Dec 16 13:06:18.380032 dockerd[2190]: time="2025-12-16T13:06:18.379914472Z" level=info msg="Starting up" Dec 16 13:06:18.381377 dockerd[2190]: time="2025-12-16T13:06:18.380947353Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:06:18.391657 dockerd[2190]: time="2025-12-16T13:06:18.391615800Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:06:18.424490 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1431138858-merged.mount: Deactivated successfully. Dec 16 13:06:18.551947 dockerd[2190]: time="2025-12-16T13:06:18.551903642Z" level=info msg="Loading containers: start." Dec 16 13:06:18.595426 kernel: Initializing XFRM netlink socket Dec 16 13:06:18.983651 systemd-networkd[1337]: docker0: Link UP Dec 16 13:06:18.997927 dockerd[2190]: time="2025-12-16T13:06:18.997891009Z" level=info msg="Loading containers: done." 
Dec 16 13:06:19.027277 dockerd[2190]: time="2025-12-16T13:06:19.027237605Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:06:19.027427 dockerd[2190]: time="2025-12-16T13:06:19.027312838Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:06:19.027427 dockerd[2190]: time="2025-12-16T13:06:19.027389403Z" level=info msg="Initializing buildkit" Dec 16 13:06:19.071060 dockerd[2190]: time="2025-12-16T13:06:19.071019427Z" level=info msg="Completed buildkit initialization" Dec 16 13:06:19.078021 dockerd[2190]: time="2025-12-16T13:06:19.077976891Z" level=info msg="Daemon has completed initialization" Dec 16 13:06:19.079133 dockerd[2190]: time="2025-12-16T13:06:19.078136889Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:06:19.078233 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:06:19.288557 update_engine[1693]: I20251216 13:06:19.287118 1693 update_attempter.cc:509] Updating boot flags... Dec 16 13:06:19.421198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2832275527-merged.mount: Deactivated successfully. Dec 16 13:06:19.858544 containerd[1711]: time="2025-12-16T13:06:19.858476965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:06:20.485787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504545609.mount: Deactivated successfully. 
Dec 16 13:06:21.638063 containerd[1711]: time="2025-12-16T13:06:21.638013347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:21.640919 containerd[1711]: time="2025-12-16T13:06:21.640888453Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27067511" Dec 16 13:06:21.644630 containerd[1711]: time="2025-12-16T13:06:21.644584787Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:21.649027 containerd[1711]: time="2025-12-16T13:06:21.648973571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:21.649863 containerd[1711]: time="2025-12-16T13:06:21.649617156Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.791097901s" Dec 16 13:06:21.649863 containerd[1711]: time="2025-12-16T13:06:21.649650459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:06:21.650521 containerd[1711]: time="2025-12-16T13:06:21.650485164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:06:22.572467 containerd[1711]: time="2025-12-16T13:06:22.572420390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:22.575166 containerd[1711]: time="2025-12-16T13:06:22.575134050Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162372" Dec 16 13:06:22.578128 containerd[1711]: time="2025-12-16T13:06:22.578084482Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:22.583339 containerd[1711]: time="2025-12-16T13:06:22.583113218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:22.584054 containerd[1711]: time="2025-12-16T13:06:22.584024549Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 933.513962ms" Dec 16 13:06:22.584109 containerd[1711]: time="2025-12-16T13:06:22.584054049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:06:22.584678 containerd[1711]: time="2025-12-16T13:06:22.584654187Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:06:23.394598 containerd[1711]: time="2025-12-16T13:06:23.394545517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:23.397416 containerd[1711]: time="2025-12-16T13:06:23.397365494Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725859" Dec 16 13:06:23.401346 containerd[1711]: time="2025-12-16T13:06:23.401308522Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:23.405587 containerd[1711]: time="2025-12-16T13:06:23.405546063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:23.406325 containerd[1711]: time="2025-12-16T13:06:23.406192944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 821.510127ms" Dec 16 13:06:23.406325 containerd[1711]: time="2025-12-16T13:06:23.406224809Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:06:23.406898 containerd[1711]: time="2025-12-16T13:06:23.406875152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:06:24.259095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842445424.mount: Deactivated successfully. 
Dec 16 13:06:24.576527 containerd[1711]: time="2025-12-16T13:06:24.576385760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:24.579950 containerd[1711]: time="2025-12-16T13:06:24.579908589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965187" Dec 16 13:06:24.583848 containerd[1711]: time="2025-12-16T13:06:24.583799729Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:24.590359 containerd[1711]: time="2025-12-16T13:06:24.589800148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:24.590359 containerd[1711]: time="2025-12-16T13:06:24.590193619Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.183292089s" Dec 16 13:06:24.590359 containerd[1711]: time="2025-12-16T13:06:24.590216846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:06:24.590793 containerd[1711]: time="2025-12-16T13:06:24.590769503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:06:25.065669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576698730.mount: Deactivated successfully. 
Dec 16 13:06:26.147457 containerd[1711]: time="2025-12-16T13:06:26.147408355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.150160 containerd[1711]: time="2025-12-16T13:06:26.150120217Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Dec 16 13:06:26.153512 containerd[1711]: time="2025-12-16T13:06:26.153471440Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.159025 containerd[1711]: time="2025-12-16T13:06:26.158774584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.159488 containerd[1711]: time="2025-12-16T13:06:26.159467211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.568671467s" Dec 16 13:06:26.159532 containerd[1711]: time="2025-12-16T13:06:26.159497796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:06:26.160047 containerd[1711]: time="2025-12-16T13:06:26.159927693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:06:26.587961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031492002.mount: Deactivated successfully. 
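Each "Pulled image" entry above reports both the bytes read and the wall-clock pull duration, so effective pull throughput can be derived directly from the log. A small sketch using the figures copied from the containerd entries above (image names abbreviated; the MB/s values are computed, not logged):

```python
# (bytes read, pull duration in seconds) copied from the containerd entries above.
pulls = {
    "kube-apiserver:v1.34.3": (27067511, 1.791097901),
    "kube-controller-manager:v1.34.3": (21162372, 0.933513962),
    "kube-scheduler:v1.34.3": (15725859, 0.821510127),
    "kube-proxy:v1.34.3": (25965187, 1.183292089),
    "coredns:v1.12.1": (22387483, 1.568671467),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
```

The pulls cluster in the mid-teens to low-twenties of MB/s, which suggests the registry fetches, not local disk, dominated the pull times on this VM.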
Dec 16 13:06:26.608146 containerd[1711]: time="2025-12-16T13:06:26.608105182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.611939 containerd[1711]: time="2025-12-16T13:06:26.611791620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Dec 16 13:06:26.615515 containerd[1711]: time="2025-12-16T13:06:26.615491759Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.619657 containerd[1711]: time="2025-12-16T13:06:26.619619427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:26.620068 containerd[1711]: time="2025-12-16T13:06:26.620047152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 460.091833ms" Dec 16 13:06:26.620137 containerd[1711]: time="2025-12-16T13:06:26.620127135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:06:26.620649 containerd[1711]: time="2025-12-16T13:06:26.620624439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:06:27.134488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143547577.mount: Deactivated successfully. Dec 16 13:06:28.184312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Dec 16 13:06:28.186941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:28.604494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:28.612895 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:28.671431 kubelet[2624]: E1216 13:06:28.670563 2624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:28.674053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:28.674263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:28.674923 systemd[1]: kubelet.service: Consumed 148ms CPU time, 109.8M memory peak. 
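Every kubelet failure in this log has the same cause: `/var/lib/kubelet/config.yaml` does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, which has not run at this point in the boot, so the crash loop is expected until the node is joined. A minimal sketch for pulling the missing path out of such an error line (error text abridged from the entries above):

```python
import re

# One of the repeating kubelet failure messages, abridged from the log above.
err = ('failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
       'error: open /var/lib/kubelet/config.yaml: no such file or directory')
missing = re.search(r"path: ([^,]+),", err).group(1)
print(missing)  # /var/lib/kubelet/config.yaml
```

The same pattern applies to any of the run.go:72 entries above; they differ only in PID and timestamp.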
Dec 16 13:06:29.238733 containerd[1711]: time="2025-12-16T13:06:29.238682620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:29.241987 containerd[1711]: time="2025-12-16T13:06:29.241957191Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166518" Dec 16 13:06:29.246147 containerd[1711]: time="2025-12-16T13:06:29.246104950Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:29.251563 containerd[1711]: time="2025-12-16T13:06:29.250784605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:29.251563 containerd[1711]: time="2025-12-16T13:06:29.251423309Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.630772401s" Dec 16 13:06:29.251563 containerd[1711]: time="2025-12-16T13:06:29.251451019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:06:31.677929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:31.678080 systemd[1]: kubelet.service: Consumed 148ms CPU time, 109.8M memory peak. Dec 16 13:06:31.680126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:31.702775 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-9.scope)... 
Dec 16 13:06:31.702788 systemd[1]: Reloading... Dec 16 13:06:31.791480 zram_generator::config[2713]: No configuration found. Dec 16 13:06:31.988823 systemd[1]: Reloading finished in 285 ms. Dec 16 13:06:32.105114 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:06:32.105210 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:06:32.105490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:32.105543 systemd[1]: kubelet.service: Consumed 83ms CPU time, 83.1M memory peak. Dec 16 13:06:32.107722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:32.623659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:32.633677 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:06:32.667500 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:06:32.667500 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:06:32.667759 kubelet[2774]: I1216 13:06:32.667549 2774 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:06:33.018284 kubelet[2774]: I1216 13:06:33.018244 2774 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:06:33.018284 kubelet[2774]: I1216 13:06:33.018270 2774 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:06:33.020557 kubelet[2774]: I1216 13:06:33.020539 2774 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:06:33.020607 kubelet[2774]: I1216 13:06:33.020557 2774 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:06:33.020793 kubelet[2774]: I1216 13:06:33.020779 2774 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:06:33.029037 kubelet[2774]: E1216 13:06:33.029001 2774 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:06:33.029447 kubelet[2774]: I1216 13:06:33.029428 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:06:33.032650 kubelet[2774]: I1216 13:06:33.032621 2774 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:06:33.037415 kubelet[2774]: I1216 13:06:33.036742 2774 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:06:33.037415 kubelet[2774]: I1216 13:06:33.036965 2774 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:06:33.037415 kubelet[2774]: I1216 13:06:33.037002 2774 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-22a3eae3ac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:06:33.037415 kubelet[2774]: I1216 13:06:33.037268 2774 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:06:33.037645 kubelet[2774]: I1216 13:06:33.037278 2774 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:06:33.037645 kubelet[2774]: I1216 13:06:33.037380 2774 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:06:33.042145 kubelet[2774]: I1216 13:06:33.042123 2774 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:33.042809 kubelet[2774]: I1216 13:06:33.042782 2774 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:06:33.042809 kubelet[2774]: I1216 13:06:33.042811 2774 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:06:33.042887 kubelet[2774]: I1216 13:06:33.042833 2774 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:06:33.042887 kubelet[2774]: I1216 13:06:33.042850 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:06:33.046220 kubelet[2774]: E1216 13:06:33.046191 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:06:33.047276 kubelet[2774]: E1216 13:06:33.046322 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-22a3eae3ac&limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:06:33.047276 kubelet[2774]: I1216 13:06:33.046443 2774 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:06:33.047276 kubelet[2774]: I1216 13:06:33.046982 2774 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:06:33.047276 kubelet[2774]: I1216 13:06:33.047012 2774 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:06:33.047276 kubelet[2774]: W1216 13:06:33.047054 2774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:06:33.051015 kubelet[2774]: I1216 13:06:33.050589 2774 server.go:1262] "Started kubelet" Dec 16 13:06:33.051989 kubelet[2774]: I1216 13:06:33.051357 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:06:33.057704 kubelet[2774]: I1216 13:06:33.057671 2774 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:06:33.058913 kubelet[2774]: I1216 13:06:33.058892 2774 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:06:33.063428 kubelet[2774]: E1216 13:06:33.061916 2774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-22a3eae3ac.1881b3f4d60db007 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-22a3eae3ac,UID:ci-4459.2.2-a-22a3eae3ac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-22a3eae3ac,},FirstTimestamp:2025-12-16 13:06:33.050558471 +0000 UTC m=+0.413861862,LastTimestamp:2025-12-16 13:06:33.050558471 +0000 UTC m=+0.413861862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-22a3eae3ac,}" Dec 16 
13:06:33.063561 kubelet[2774]: I1216 13:06:33.063459 2774 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:06:33.063561 kubelet[2774]: I1216 13:06:33.063503 2774 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:06:33.064415 kubelet[2774]: I1216 13:06:33.063672 2774 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:06:33.064415 kubelet[2774]: I1216 13:06:33.063923 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:06:33.064415 kubelet[2774]: I1216 13:06:33.064305 2774 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:06:33.064584 kubelet[2774]: E1216 13:06:33.064569 2774 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:33.066538 kubelet[2774]: E1216 13:06:33.066516 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-22a3eae3ac?timeout=10s\": dial tcp 10.200.0.33:6443: connect: connection refused" interval="200ms" Dec 16 13:06:33.068411 kubelet[2774]: I1216 13:06:33.067201 2774 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:06:33.068411 kubelet[2774]: I1216 13:06:33.067280 2774 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:06:33.068411 kubelet[2774]: I1216 13:06:33.067527 2774 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:06:33.068411 kubelet[2774]: I1216 13:06:33.067569 2774 reconciler.go:29] "Reconciler: start to sync state" Dec 
16 13:06:33.068411 kubelet[2774]: E1216 13:06:33.068138 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:06:33.068935 kubelet[2774]: I1216 13:06:33.068921 2774 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:06:33.084708 kubelet[2774]: E1216 13:06:33.084678 2774 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:06:33.087419 kubelet[2774]: I1216 13:06:33.086672 2774 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:06:33.088389 kubelet[2774]: I1216 13:06:33.088354 2774 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:06:33.088663 kubelet[2774]: I1216 13:06:33.088391 2774 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:06:33.088697 kubelet[2774]: I1216 13:06:33.088681 2774 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:06:33.088821 kubelet[2774]: E1216 13:06:33.088715 2774 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:06:33.092547 kubelet[2774]: E1216 13:06:33.092517 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:06:33.097327 kubelet[2774]: I1216 13:06:33.097305 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:06:33.097327 kubelet[2774]: I1216 13:06:33.097325 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:06:33.097438 kubelet[2774]: I1216 13:06:33.097339 2774 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:33.102488 kubelet[2774]: I1216 13:06:33.102470 2774 policy_none.go:49] "None policy: Start" Dec 16 13:06:33.102552 kubelet[2774]: I1216 13:06:33.102494 2774 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:06:33.102552 kubelet[2774]: I1216 13:06:33.102506 2774 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:06:33.107864 kubelet[2774]: I1216 13:06:33.107851 2774 policy_none.go:47] "Start" Dec 16 13:06:33.111140 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:06:33.124491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 16 13:06:33.127391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:06:33.134952 kubelet[2774]: E1216 13:06:33.134932 2774 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:06:33.135106 kubelet[2774]: I1216 13:06:33.135089 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:06:33.135141 kubelet[2774]: I1216 13:06:33.135107 2774 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:06:33.135458 kubelet[2774]: I1216 13:06:33.135436 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:06:33.137293 kubelet[2774]: E1216 13:06:33.137163 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:06:33.137293 kubelet[2774]: E1216 13:06:33.137202 2774 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:33.200736 systemd[1]: Created slice kubepods-burstable-podffe853898a662577e8a6f3531f49f397.slice - libcontainer container kubepods-burstable-podffe853898a662577e8a6f3531f49f397.slice. Dec 16 13:06:33.210132 kubelet[2774]: E1216 13:06:33.210056 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.214137 systemd[1]: Created slice kubepods-burstable-pod5067bc7df7fb561b67144686a58bab28.slice - libcontainer container kubepods-burstable-pod5067bc7df7fb561b67144686a58bab28.slice. 
Dec 16 13:06:33.216087 kubelet[2774]: E1216 13:06:33.216052 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.218221 systemd[1]: Created slice kubepods-burstable-podb4b0983311b2442522c83d2757ffa5f7.slice - libcontainer container kubepods-burstable-podb4b0983311b2442522c83d2757ffa5f7.slice. Dec 16 13:06:33.219755 kubelet[2774]: E1216 13:06:33.219627 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.236812 kubelet[2774]: I1216 13:06:33.236791 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.237168 kubelet[2774]: E1216 13:06:33.237145 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.33:6443/api/v1/nodes\": dial tcp 10.200.0.33:6443: connect: connection refused" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.267644 kubelet[2774]: E1216 13:06:33.267611 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-22a3eae3ac?timeout=10s\": dial tcp 10.200.0.33:6443: connect: connection refused" interval="400ms" Dec 16 13:06:33.269028 kubelet[2774]: I1216 13:06:33.268766 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269028 kubelet[2774]: I1216 13:06:33.268797 2774 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269028 kubelet[2774]: I1216 13:06:33.268816 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269028 kubelet[2774]: I1216 13:06:33.268835 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269028 kubelet[2774]: I1216 13:06:33.268860 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269187 kubelet[2774]: I1216 13:06:33.268884 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269187 kubelet[2774]: I1216 13:06:33.268912 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269187 kubelet[2774]: I1216 13:06:33.268935 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.269187 kubelet[2774]: I1216 13:06:33.268979 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4b0983311b2442522c83d2757ffa5f7-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-22a3eae3ac\" (UID: \"b4b0983311b2442522c83d2757ffa5f7\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.438864 kubelet[2774]: I1216 13:06:33.438833 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.439207 kubelet[2774]: E1216 13:06:33.439182 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.33:6443/api/v1/nodes\": dial tcp 10.200.0.33:6443: connect: connection refused" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.516776 containerd[1711]: time="2025-12-16T13:06:33.516729018Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-22a3eae3ac,Uid:ffe853898a662577e8a6f3531f49f397,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:33.521613 containerd[1711]: time="2025-12-16T13:06:33.521529276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-22a3eae3ac,Uid:5067bc7df7fb561b67144686a58bab28,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:33.526359 containerd[1711]: time="2025-12-16T13:06:33.526331456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-22a3eae3ac,Uid:b4b0983311b2442522c83d2757ffa5f7,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:33.668738 kubelet[2774]: E1216 13:06:33.668674 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-22a3eae3ac?timeout=10s\": dial tcp 10.200.0.33:6443: connect: connection refused" interval="800ms" Dec 16 13:06:33.840949 kubelet[2774]: I1216 13:06:33.840867 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.841188 kubelet[2774]: E1216 13:06:33.841165 2774 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.33:6443/api/v1/nodes\": dial tcp 10.200.0.33:6443: connect: connection refused" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:33.967741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535379822.mount: Deactivated successfully. 
Dec 16 13:06:33.992859 containerd[1711]: time="2025-12-16T13:06:33.992819387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:06:34.007490 containerd[1711]: time="2025-12-16T13:06:34.007374671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Dec 16 13:06:34.012137 containerd[1711]: time="2025-12-16T13:06:34.012110711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:06:34.015823 containerd[1711]: time="2025-12-16T13:06:34.015789417Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:06:34.021413 containerd[1711]: time="2025-12-16T13:06:34.021366777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:06:34.025002 containerd[1711]: time="2025-12-16T13:06:34.024957917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:06:34.028614 containerd[1711]: time="2025-12-16T13:06:34.028588899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:06:34.029096 containerd[1711]: time="2025-12-16T13:06:34.029074647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.136291ms" Dec 16 13:06:34.031525 containerd[1711]: time="2025-12-16T13:06:34.031471112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:06:34.034993 containerd[1711]: time="2025-12-16T13:06:34.034962289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.9001ms" Dec 16 13:06:34.046624 kubelet[2774]: E1216 13:06:34.046595 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:06:34.065862 kubelet[2774]: E1216 13:06:34.065822 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:06:34.101409 containerd[1711]: time="2025-12-16T13:06:34.101271957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 567.121608ms" Dec 16 13:06:34.115410 containerd[1711]: time="2025-12-16T13:06:34.115254508Z" level=info msg="connecting to shim 27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70" address="unix:///run/containerd/s/2c457149e4c74b841694343276c7f1ba07da70b2a6b2b4dfd2ac1c1ecbdd730b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:34.116578 containerd[1711]: time="2025-12-16T13:06:34.116024443Z" level=info msg="connecting to shim 307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685" address="unix:///run/containerd/s/c21c5e9c929151e165771773b139554563ef31c789607b371bccca827ed7b020" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:34.148839 containerd[1711]: time="2025-12-16T13:06:34.148808830Z" level=info msg="connecting to shim 2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a" address="unix:///run/containerd/s/2cb352f04e1e807ac506222fec8413d8b1bb16be3e99022bf3253c804f173f8e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:34.150690 systemd[1]: Started cri-containerd-27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70.scope - libcontainer container 27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70. Dec 16 13:06:34.152380 systemd[1]: Started cri-containerd-307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685.scope - libcontainer container 307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685. Dec 16 13:06:34.185514 systemd[1]: Started cri-containerd-2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a.scope - libcontainer container 2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a. 
Dec 16 13:06:34.227837 kubelet[2774]: E1216 13:06:34.227720 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:06:34.230463 containerd[1711]: time="2025-12-16T13:06:34.230431851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-22a3eae3ac,Uid:5067bc7df7fb561b67144686a58bab28,Namespace:kube-system,Attempt:0,} returns sandbox id \"27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70\"" Dec 16 13:06:34.235049 containerd[1711]: time="2025-12-16T13:06:34.235010029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-22a3eae3ac,Uid:ffe853898a662577e8a6f3531f49f397,Namespace:kube-system,Attempt:0,} returns sandbox id \"307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685\"" Dec 16 13:06:34.241844 containerd[1711]: time="2025-12-16T13:06:34.241767986Z" level=info msg="CreateContainer within sandbox \"27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:06:34.245276 containerd[1711]: time="2025-12-16T13:06:34.245203452Z" level=info msg="CreateContainer within sandbox \"307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:06:34.267575 containerd[1711]: time="2025-12-16T13:06:34.267547264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-22a3eae3ac,Uid:b4b0983311b2442522c83d2757ffa5f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a\"" Dec 16 13:06:34.274430 containerd[1711]: 
time="2025-12-16T13:06:34.274060262Z" level=info msg="Container 8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:34.275508 containerd[1711]: time="2025-12-16T13:06:34.275468603Z" level=info msg="CreateContainer within sandbox \"2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:06:34.278111 containerd[1711]: time="2025-12-16T13:06:34.277656779Z" level=info msg="Container d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:34.290818 containerd[1711]: time="2025-12-16T13:06:34.290795507Z" level=info msg="CreateContainer within sandbox \"27390806662335b774c2c47f857aaf8e6a4d7691246d9c9aaf0c6976dfb95a70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1\"" Dec 16 13:06:34.291358 containerd[1711]: time="2025-12-16T13:06:34.291336639Z" level=info msg="StartContainer for \"8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1\"" Dec 16 13:06:34.292274 containerd[1711]: time="2025-12-16T13:06:34.292243798Z" level=info msg="connecting to shim 8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1" address="unix:///run/containerd/s/2c457149e4c74b841694343276c7f1ba07da70b2a6b2b4dfd2ac1c1ecbdd730b" protocol=ttrpc version=3 Dec 16 13:06:34.308553 systemd[1]: Started cri-containerd-8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1.scope - libcontainer container 8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1. 
Dec 16 13:06:34.310969 containerd[1711]: time="2025-12-16T13:06:34.310920491Z" level=info msg="CreateContainer within sandbox \"307bbd191949c772f3bcbc155ee8a41ed5a3e91be268dddec92ac6214b725685\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f\"" Dec 16 13:06:34.311897 containerd[1711]: time="2025-12-16T13:06:34.311859641Z" level=info msg="StartContainer for \"d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f\"" Dec 16 13:06:34.313818 containerd[1711]: time="2025-12-16T13:06:34.313687387Z" level=info msg="connecting to shim d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f" address="unix:///run/containerd/s/c21c5e9c929151e165771773b139554563ef31c789607b371bccca827ed7b020" protocol=ttrpc version=3 Dec 16 13:06:34.318449 containerd[1711]: time="2025-12-16T13:06:34.317753366Z" level=info msg="Container cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:34.334515 systemd[1]: Started cri-containerd-d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f.scope - libcontainer container d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f. 
Dec 16 13:06:34.343526 containerd[1711]: time="2025-12-16T13:06:34.343499546Z" level=info msg="CreateContainer within sandbox \"2e243af0e694cf81bd17d55f860271910f0a494b91e3dcc10f9b987ecbacf94a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7\"" Dec 16 13:06:34.343944 containerd[1711]: time="2025-12-16T13:06:34.343923192Z" level=info msg="StartContainer for \"cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7\"" Dec 16 13:06:34.349643 containerd[1711]: time="2025-12-16T13:06:34.349610419Z" level=info msg="connecting to shim cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7" address="unix:///run/containerd/s/2cb352f04e1e807ac506222fec8413d8b1bb16be3e99022bf3253c804f173f8e" protocol=ttrpc version=3 Dec 16 13:06:34.374669 systemd[1]: Started cri-containerd-cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7.scope - libcontainer container cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7. 
Dec 16 13:06:34.382143 kubelet[2774]: E1216 13:06:34.382109 2774 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-22a3eae3ac&limit=500&resourceVersion=0\": dial tcp 10.200.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:06:34.392566 containerd[1711]: time="2025-12-16T13:06:34.392531219Z" level=info msg="StartContainer for \"8d844d530a288ac2c61ad77a1cbd676042bad8034c648e5ff2b3640cf886cba1\" returns successfully" Dec 16 13:06:34.431590 containerd[1711]: time="2025-12-16T13:06:34.431511941Z" level=info msg="StartContainer for \"d5d9803730e85bb379f66aab9e30be24673b6b66080950de22d012fa6769744f\" returns successfully" Dec 16 13:06:34.449564 containerd[1711]: time="2025-12-16T13:06:34.449538347Z" level=info msg="StartContainer for \"cef66936ce413332f02e5183746a12fc97dfe0991b7ac6bebe9021345f0e7aa7\" returns successfully" Dec 16 13:06:34.469581 kubelet[2774]: E1216 13:06:34.469549 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-22a3eae3ac?timeout=10s\": dial tcp 10.200.0.33:6443: connect: connection refused" interval="1.6s" Dec 16 13:06:34.643682 kubelet[2774]: I1216 13:06:34.643585 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:35.106718 kubelet[2774]: E1216 13:06:35.106681 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:35.116767 kubelet[2774]: E1216 13:06:35.116737 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" 
node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:35.121084 kubelet[2774]: E1216 13:06:35.121060 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:36.122197 kubelet[2774]: E1216 13:06:36.122122 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:36.124694 kubelet[2774]: E1216 13:06:36.124434 2774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:36.775018 kubelet[2774]: I1216 13:06:36.774887 2774 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:36.775018 kubelet[2774]: E1216 13:06:36.774927 2774 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-22a3eae3ac\": node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:36.815463 kubelet[2774]: E1216 13:06:36.815425 2774 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:36.822867 kubelet[2774]: E1216 13:06:36.822711 2774 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.2-a-22a3eae3ac.1881b3f4d60db007 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-22a3eae3ac,UID:ci-4459.2.2-a-22a3eae3ac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-22a3eae3ac,},FirstTimestamp:2025-12-16 13:06:33.050558471 +0000 UTC m=+0.413861862,LastTimestamp:2025-12-16 
13:06:33.050558471 +0000 UTC m=+0.413861862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-22a3eae3ac,}" Dec 16 13:06:36.875608 kubelet[2774]: E1216 13:06:36.875565 2774 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 16 13:06:36.916458 kubelet[2774]: E1216 13:06:36.916420 2774 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:37.016950 kubelet[2774]: E1216 13:06:37.016825 2774 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:37.166065 kubelet[2774]: I1216 13:06:37.165675 2774 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:37.169414 kubelet[2774]: E1216 13:06:37.169370 2774 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:37.169414 kubelet[2774]: I1216 13:06:37.169408 2774 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:37.170720 kubelet[2774]: E1216 13:06:37.170684 2774 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:37.170720 kubelet[2774]: I1216 13:06:37.170708 2774 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:37.172000 kubelet[2774]: E1216 13:06:37.171975 
2774 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-22a3eae3ac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:38.047230 kubelet[2774]: I1216 13:06:38.047195 2774 apiserver.go:52] "Watching apiserver" Dec 16 13:06:38.067690 kubelet[2774]: I1216 13:06:38.067649 2774 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:06:38.487033 kubelet[2774]: I1216 13:06:38.486991 2774 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:38.495287 kubelet[2774]: I1216 13:06:38.495255 2774 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:06:39.078045 systemd[1]: Reload requested from client PID 3063 ('systemctl') (unit session-9.scope)... Dec 16 13:06:39.078061 systemd[1]: Reloading... Dec 16 13:06:39.158427 zram_generator::config[3116]: No configuration found. Dec 16 13:06:39.355750 systemd[1]: Reloading finished in 277 ms. Dec 16 13:06:39.384651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:39.408319 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:06:39.408569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:39.408620 systemd[1]: kubelet.service: Consumed 742ms CPU time, 125.9M memory peak. Dec 16 13:06:39.410326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:39.791572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:06:39.802673 (kubelet)[3177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:06:39.845414 kubelet[3177]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:06:39.845414 kubelet[3177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:06:39.845414 kubelet[3177]: I1216 13:06:39.845127 3177 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:06:39.851820 kubelet[3177]: I1216 13:06:39.851793 3177 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:06:39.851820 kubelet[3177]: I1216 13:06:39.851814 3177 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:06:39.851949 kubelet[3177]: I1216 13:06:39.851838 3177 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:06:39.851949 kubelet[3177]: I1216 13:06:39.851844 3177 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:06:39.852062 kubelet[3177]: I1216 13:06:39.852045 3177 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:06:39.854341 kubelet[3177]: I1216 13:06:39.854275 3177 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:06:39.857965 kubelet[3177]: I1216 13:06:39.857895 3177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:06:39.861814 kubelet[3177]: I1216 13:06:39.861795 3177 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:06:39.864618 kubelet[3177]: I1216 13:06:39.864596 3177 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 13:06:39.864774 kubelet[3177]: I1216 13:06:39.864753 3177 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:06:39.864899 kubelet[3177]: I1216 13:06:39.864774 3177 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459.2.2-a-22a3eae3ac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:06:39.864996 kubelet[3177]: I1216 13:06:39.864905 3177 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:06:39.864996 kubelet[3177]: I1216 13:06:39.864915 3177 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:06:39.864996 kubelet[3177]: I1216 13:06:39.864935 3177 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:06:39.865511 kubelet[3177]: I1216 13:06:39.865499 3177 
state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:39.865981 kubelet[3177]: I1216 13:06:39.865615 3177 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:06:39.865981 kubelet[3177]: I1216 13:06:39.865631 3177 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:06:39.865981 kubelet[3177]: I1216 13:06:39.865653 3177 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:06:39.865981 kubelet[3177]: I1216 13:06:39.865668 3177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:06:39.871508 kubelet[3177]: I1216 13:06:39.871491 3177 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:06:39.871991 kubelet[3177]: I1216 13:06:39.871971 3177 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:06:39.872043 kubelet[3177]: I1216 13:06:39.872002 3177 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:06:39.875202 kubelet[3177]: I1216 13:06:39.874120 3177 server.go:1262] "Started kubelet" Dec 16 13:06:39.876320 kubelet[3177]: I1216 13:06:39.876240 3177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:06:39.879204 kubelet[3177]: I1216 13:06:39.879162 3177 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:06:39.881570 kubelet[3177]: I1216 13:06:39.881552 3177 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:06:39.882442 kubelet[3177]: E1216 13:06:39.882428 3177 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:06:39.882901 kubelet[3177]: I1216 13:06:39.882841 3177 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:06:39.883011 kubelet[3177]: I1216 13:06:39.883002 3177 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:06:39.883272 kubelet[3177]: I1216 13:06:39.883260 3177 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:06:39.884286 kubelet[3177]: I1216 13:06:39.884264 3177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:06:39.886737 kubelet[3177]: I1216 13:06:39.886334 3177 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:06:39.886737 kubelet[3177]: E1216 13:06:39.886556 3177 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-22a3eae3ac\" not found" Dec 16 13:06:39.891175 kubelet[3177]: I1216 13:06:39.891153 3177 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:06:39.891433 kubelet[3177]: I1216 13:06:39.891259 3177 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:06:39.892830 kubelet[3177]: I1216 13:06:39.891744 3177 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:06:39.892830 kubelet[3177]: I1216 13:06:39.891863 3177 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:06:39.895229 kubelet[3177]: I1216 13:06:39.895198 3177 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:06:39.902481 kubelet[3177]: I1216 13:06:39.902446 3177 kubelet_network_linux.go:54] 
"Initialized iptables rules." protocol="IPv4" Dec 16 13:06:39.903286 kubelet[3177]: I1216 13:06:39.903264 3177 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:06:39.903286 kubelet[3177]: I1216 13:06:39.903281 3177 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:06:39.903363 kubelet[3177]: I1216 13:06:39.903302 3177 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:06:39.903363 kubelet[3177]: E1216 13:06:39.903333 3177 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:06:39.938890 kubelet[3177]: I1216 13:06:39.938866 3177 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:06:39.938890 kubelet[3177]: I1216 13:06:39.938880 3177 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939054 3177 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939177 3177 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939187 3177 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939203 3177 policy_none.go:49] "None policy: Start" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939213 3177 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939222 3177 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939307 3177 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:06:39.939608 kubelet[3177]: I1216 13:06:39.939314 3177 policy_none.go:47] "Start" Dec 16 13:06:39.942794 kubelet[3177]: E1216 13:06:39.942777 3177 manager.go:513] 
"Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:06:39.942922 kubelet[3177]: I1216 13:06:39.942911 3177 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:06:39.942974 kubelet[3177]: I1216 13:06:39.942924 3177 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:06:39.945043 kubelet[3177]: I1216 13:06:39.944219 3177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:06:39.947832 kubelet[3177]: E1216 13:06:39.947816 3177 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:06:40.004068 kubelet[3177]: I1216 13:06:40.004044 3177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.005699 kubelet[3177]: I1216 13:06:40.004219 3177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.005839 kubelet[3177]: I1216 13:06:40.004368 3177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.011156 kubelet[3177]: I1216 13:06:40.011133 3177 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:06:40.016085 kubelet[3177]: I1216 13:06:40.016056 3177 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:06:40.016735 kubelet[3177]: I1216 13:06:40.016714 3177 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots]" Dec 16 13:06:40.016822 kubelet[3177]: E1216 13:06:40.016758 3177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-22a3eae3ac\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.045879 kubelet[3177]: I1216 13:06:40.045592 3177 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.056243 kubelet[3177]: I1216 13:06:40.056223 3177 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.056320 kubelet[3177]: I1216 13:06:40.056279 3177 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093180 kubelet[3177]: I1216 13:06:40.093151 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093261 kubelet[3177]: I1216 13:06:40.093190 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093261 kubelet[3177]: I1216 13:06:40.093209 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093261 kubelet[3177]: I1216 13:06:40.093226 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093261 kubelet[3177]: I1216 13:06:40.093245 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4b0983311b2442522c83d2757ffa5f7-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-22a3eae3ac\" (UID: \"b4b0983311b2442522c83d2757ffa5f7\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093363 kubelet[3177]: I1216 13:06:40.093267 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093363 kubelet[3177]: I1216 13:06:40.093284 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe853898a662577e8a6f3531f49f397-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" (UID: \"ffe853898a662577e8a6f3531f49f397\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093363 kubelet[3177]: I1216 13:06:40.093302 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.093363 kubelet[3177]: I1216 13:06:40.093322 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5067bc7df7fb561b67144686a58bab28-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-22a3eae3ac\" (UID: \"5067bc7df7fb561b67144686a58bab28\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.866640 kubelet[3177]: I1216 13:06:40.866612 3177 apiserver.go:52] "Watching apiserver" Dec 16 13:06:40.892145 kubelet[3177]: I1216 13:06:40.892114 3177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:06:40.925938 kubelet[3177]: I1216 13:06:40.925555 3177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.926619 kubelet[3177]: I1216 13:06:40.926605 3177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.941053 kubelet[3177]: I1216 13:06:40.940551 3177 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:06:40.941053 kubelet[3177]: E1216 13:06:40.940604 3177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-22a3eae3ac\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.941053 kubelet[3177]: I1216 13:06:40.940765 3177 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots]" Dec 16 13:06:40.941053 kubelet[3177]: E1216 13:06:40.940787 3177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-22a3eae3ac\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" Dec 16 13:06:40.991240 kubelet[3177]: I1216 13:06:40.991180 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-22a3eae3ac" podStartSLOduration=0.991163196 podStartE2EDuration="991.163196ms" podCreationTimestamp="2025-12-16 13:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:40.978519721 +0000 UTC m=+1.172017927" watchObservedRunningTime="2025-12-16 13:06:40.991163196 +0000 UTC m=+1.184661401" Dec 16 13:06:41.009177 kubelet[3177]: I1216 13:06:41.009131 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-22a3eae3ac" podStartSLOduration=1.009116155 podStartE2EDuration="1.009116155s" podCreationTimestamp="2025-12-16 13:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:40.99296843 +0000 UTC m=+1.186466637" watchObservedRunningTime="2025-12-16 13:06:41.009116155 +0000 UTC m=+1.202614360" Dec 16 13:06:41.018344 kubelet[3177]: I1216 13:06:41.018214 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-22a3eae3ac" podStartSLOduration=3.018198575 podStartE2EDuration="3.018198575s" podCreationTimestamp="2025-12-16 13:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:41.009352678 +0000 UTC m=+1.202850887" watchObservedRunningTime="2025-12-16 13:06:41.018198575 +0000 UTC m=+1.211696780" Dec 16 
13:06:44.790769 kubelet[3177]: I1216 13:06:44.790606 3177 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:06:44.791237 kubelet[3177]: I1216 13:06:44.791124 3177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:06:44.791271 containerd[1711]: time="2025-12-16T13:06:44.790909447Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:06:45.377553 systemd[1]: Created slice kubepods-besteffort-podd132e381_fe7c_412c_b658_cc3669d5b81f.slice - libcontainer container kubepods-besteffort-podd132e381_fe7c_412c_b658_cc3669d5b81f.slice. Dec 16 13:06:45.419446 kubelet[3177]: I1216 13:06:45.419340 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d132e381-fe7c-412c-b658-cc3669d5b81f-xtables-lock\") pod \"kube-proxy-4429b\" (UID: \"d132e381-fe7c-412c-b658-cc3669d5b81f\") " pod="kube-system/kube-proxy-4429b" Dec 16 13:06:45.419585 kubelet[3177]: I1216 13:06:45.419455 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwjh\" (UniqueName: \"kubernetes.io/projected/d132e381-fe7c-412c-b658-cc3669d5b81f-kube-api-access-dmwjh\") pod \"kube-proxy-4429b\" (UID: \"d132e381-fe7c-412c-b658-cc3669d5b81f\") " pod="kube-system/kube-proxy-4429b" Dec 16 13:06:45.419585 kubelet[3177]: I1216 13:06:45.419478 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d132e381-fe7c-412c-b658-cc3669d5b81f-kube-proxy\") pod \"kube-proxy-4429b\" (UID: \"d132e381-fe7c-412c-b658-cc3669d5b81f\") " pod="kube-system/kube-proxy-4429b" Dec 16 13:06:45.419585 kubelet[3177]: I1216 13:06:45.419532 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d132e381-fe7c-412c-b658-cc3669d5b81f-lib-modules\") pod \"kube-proxy-4429b\" (UID: \"d132e381-fe7c-412c-b658-cc3669d5b81f\") " pod="kube-system/kube-proxy-4429b" Dec 16 13:06:45.692340 containerd[1711]: time="2025-12-16T13:06:45.692303242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4429b,Uid:d132e381-fe7c-412c-b658-cc3669d5b81f,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:45.746945 containerd[1711]: time="2025-12-16T13:06:45.746879622Z" level=info msg="connecting to shim 0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c" address="unix:///run/containerd/s/496e43fd68fad8f82439daeca5bc8c371ef509ba60e538cde5a39a1a46c199ce" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:45.774534 systemd[1]: Started cri-containerd-0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c.scope - libcontainer container 0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c. 
Dec 16 13:06:45.803518 containerd[1711]: time="2025-12-16T13:06:45.803175372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4429b,Uid:d132e381-fe7c-412c-b658-cc3669d5b81f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c\"" Dec 16 13:06:45.814152 containerd[1711]: time="2025-12-16T13:06:45.814119902Z" level=info msg="CreateContainer within sandbox \"0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:06:45.837638 containerd[1711]: time="2025-12-16T13:06:45.837270937Z" level=info msg="Container 49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:45.859580 containerd[1711]: time="2025-12-16T13:06:45.859551236Z" level=info msg="CreateContainer within sandbox \"0b29f9833c3d3c25da2eae5b1bfce54ca5db8e7d90c10d8f06a30465328f5c4c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e\"" Dec 16 13:06:45.860180 containerd[1711]: time="2025-12-16T13:06:45.860154998Z" level=info msg="StartContainer for \"49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e\"" Dec 16 13:06:45.861740 containerd[1711]: time="2025-12-16T13:06:45.861698258Z" level=info msg="connecting to shim 49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e" address="unix:///run/containerd/s/496e43fd68fad8f82439daeca5bc8c371ef509ba60e538cde5a39a1a46c199ce" protocol=ttrpc version=3 Dec 16 13:06:45.880559 systemd[1]: Started cri-containerd-49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e.scope - libcontainer container 49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e. 
Dec 16 13:06:45.953075 containerd[1711]: time="2025-12-16T13:06:45.952797500Z" level=info msg="StartContainer for \"49ee44a688420ba609fc56c6aeab39e86e58ba057ead4fb152f4ece3cfa0f59e\" returns successfully" Dec 16 13:06:45.974576 systemd[1]: Created slice kubepods-besteffort-pode6832cb5_85aa_418e_a57e_41795b36701b.slice - libcontainer container kubepods-besteffort-pode6832cb5_85aa_418e_a57e_41795b36701b.slice. Dec 16 13:06:46.024808 kubelet[3177]: I1216 13:06:46.024774 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqwv\" (UniqueName: \"kubernetes.io/projected/e6832cb5-85aa-418e-a57e-41795b36701b-kube-api-access-5dqwv\") pod \"tigera-operator-65cdcdfd6d-c8wh4\" (UID: \"e6832cb5-85aa-418e-a57e-41795b36701b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-c8wh4" Dec 16 13:06:46.024808 kubelet[3177]: I1216 13:06:46.024813 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6832cb5-85aa-418e-a57e-41795b36701b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-c8wh4\" (UID: \"e6832cb5-85aa-418e-a57e-41795b36701b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-c8wh4" Dec 16 13:06:46.284286 containerd[1711]: time="2025-12-16T13:06:46.284194179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-c8wh4,Uid:e6832cb5-85aa-418e-a57e-41795b36701b,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:06:46.325456 containerd[1711]: time="2025-12-16T13:06:46.324869096Z" level=info msg="connecting to shim 9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22" address="unix:///run/containerd/s/a6af3663dfd150aa44f43f0189e5fc3e5a096b4af3287aa71577ff8af246a324" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:46.352550 systemd[1]: Started cri-containerd-9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22.scope - libcontainer container 
9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22. Dec 16 13:06:46.393463 containerd[1711]: time="2025-12-16T13:06:46.393377122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-c8wh4,Uid:e6832cb5-85aa-418e-a57e-41795b36701b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22\"" Dec 16 13:06:46.395353 containerd[1711]: time="2025-12-16T13:06:46.394847453Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:06:47.419586 kubelet[3177]: I1216 13:06:47.419224 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4429b" podStartSLOduration=2.419207047 podStartE2EDuration="2.419207047s" podCreationTimestamp="2025-12-16 13:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:46.954700162 +0000 UTC m=+7.148198373" watchObservedRunningTime="2025-12-16 13:06:47.419207047 +0000 UTC m=+7.612705286" Dec 16 13:06:48.455765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438797420.mount: Deactivated successfully. 
Dec 16 13:06:48.932726 containerd[1711]: time="2025-12-16T13:06:48.932497953Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:48.936853 containerd[1711]: time="2025-12-16T13:06:48.936741253Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:06:48.940996 containerd[1711]: time="2025-12-16T13:06:48.940969637Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:48.948336 containerd[1711]: time="2025-12-16T13:06:48.945973935Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:06:48.948336 containerd[1711]: time="2025-12-16T13:06:48.945984293Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.551107317s" Dec 16 13:06:48.948336 containerd[1711]: time="2025-12-16T13:06:48.946130574Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:06:48.979622 containerd[1711]: time="2025-12-16T13:06:48.979570254Z" level=info msg="CreateContainer within sandbox \"9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:06:48.999261 containerd[1711]: time="2025-12-16T13:06:48.998670651Z" level=info msg="Container 
9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:49.012788 containerd[1711]: time="2025-12-16T13:06:49.012757757Z" level=info msg="CreateContainer within sandbox \"9ff254e3be5b328d655cb2b98fcd630e121929610468f028de3a5e555c01af22\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1\"" Dec 16 13:06:49.013302 containerd[1711]: time="2025-12-16T13:06:49.013278702Z" level=info msg="StartContainer for \"9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1\"" Dec 16 13:06:49.014260 containerd[1711]: time="2025-12-16T13:06:49.014209832Z" level=info msg="connecting to shim 9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1" address="unix:///run/containerd/s/a6af3663dfd150aa44f43f0189e5fc3e5a096b4af3287aa71577ff8af246a324" protocol=ttrpc version=3 Dec 16 13:06:49.032545 systemd[1]: Started cri-containerd-9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1.scope - libcontainer container 9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1. 
Dec 16 13:06:49.067126 containerd[1711]: time="2025-12-16T13:06:49.067101946Z" level=info msg="StartContainer for \"9ffa8901ffa4905c216032c7ec9c8eb15813c727355a1de73667727fdaf1e6e1\" returns successfully" Dec 16 13:06:49.960857 kubelet[3177]: I1216 13:06:49.960708 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-c8wh4" podStartSLOduration=2.406094372 podStartE2EDuration="4.960370257s" podCreationTimestamp="2025-12-16 13:06:45 +0000 UTC" firstStartedPulling="2025-12-16 13:06:46.394518992 +0000 UTC m=+6.588017184" lastFinishedPulling="2025-12-16 13:06:48.948794872 +0000 UTC m=+9.142293069" observedRunningTime="2025-12-16 13:06:49.960245082 +0000 UTC m=+10.153743316" watchObservedRunningTime="2025-12-16 13:06:49.960370257 +0000 UTC m=+10.153868463" Dec 16 13:06:54.814700 sudo[2171]: pam_unix(sudo:session): session closed for user root Dec 16 13:06:54.903378 sshd[2170]: Connection closed by 10.200.16.10 port 37436 Dec 16 13:06:54.903898 sshd-session[2167]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:54.907865 systemd[1]: sshd@6-10.200.0.33:22-10.200.16.10:37436.service: Deactivated successfully. Dec 16 13:06:54.911265 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:06:54.911872 systemd[1]: session-9.scope: Consumed 3.912s CPU time, 228.5M memory peak. Dec 16 13:06:54.914790 systemd-logind[1691]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:06:54.916868 systemd-logind[1691]: Removed session 9. Dec 16 13:06:59.695744 systemd[1]: Created slice kubepods-besteffort-poddc0200d5_dc86_4f93_adfe_e426a95b8fe6.slice - libcontainer container kubepods-besteffort-poddc0200d5_dc86_4f93_adfe_e426a95b8fe6.slice. 
Dec 16 13:06:59.707551 kubelet[3177]: I1216 13:06:59.707513 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dc0200d5-dc86-4f93-adfe-e426a95b8fe6-typha-certs\") pod \"calico-typha-6f5b4ccb87-qft5c\" (UID: \"dc0200d5-dc86-4f93-adfe-e426a95b8fe6\") " pod="calico-system/calico-typha-6f5b4ccb87-qft5c" Dec 16 13:06:59.707551 kubelet[3177]: I1216 13:06:59.707560 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc0200d5-dc86-4f93-adfe-e426a95b8fe6-tigera-ca-bundle\") pod \"calico-typha-6f5b4ccb87-qft5c\" (UID: \"dc0200d5-dc86-4f93-adfe-e426a95b8fe6\") " pod="calico-system/calico-typha-6f5b4ccb87-qft5c" Dec 16 13:06:59.707551 kubelet[3177]: I1216 13:06:59.707583 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bk66\" (UniqueName: \"kubernetes.io/projected/dc0200d5-dc86-4f93-adfe-e426a95b8fe6-kube-api-access-7bk66\") pod \"calico-typha-6f5b4ccb87-qft5c\" (UID: \"dc0200d5-dc86-4f93-adfe-e426a95b8fe6\") " pod="calico-system/calico-typha-6f5b4ccb87-qft5c" Dec 16 13:06:59.904883 systemd[1]: Created slice kubepods-besteffort-pod9d02819f_9a3e_4afb_8414_1e96b1fe2e87.slice - libcontainer container kubepods-besteffort-pod9d02819f_9a3e_4afb_8414_1e96b1fe2e87.slice. 
Dec 16 13:07:00.004607 containerd[1711]: time="2025-12-16T13:07:00.004497258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5b4ccb87-qft5c,Uid:dc0200d5-dc86-4f93-adfe-e426a95b8fe6,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:00.009893 kubelet[3177]: I1216 13:07:00.009429 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-policysync\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.009893 kubelet[3177]: I1216 13:07:00.009503 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-lib-modules\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.009893 kubelet[3177]: I1216 13:07:00.009523 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-tigera-ca-bundle\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.009893 kubelet[3177]: I1216 13:07:00.009543 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-flexvol-driver-host\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.009893 kubelet[3177]: I1216 13:07:00.009618 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-var-run-calico\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010087 kubelet[3177]: I1216 13:07:00.009656 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-xtables-lock\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010087 kubelet[3177]: I1216 13:07:00.009674 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwplk\" (UniqueName: \"kubernetes.io/projected/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-kube-api-access-dwplk\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010087 kubelet[3177]: I1216 13:07:00.009692 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-cni-log-dir\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010087 kubelet[3177]: I1216 13:07:00.009745 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-cni-net-dir\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010087 kubelet[3177]: I1216 13:07:00.009762 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-var-lib-calico\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010178 kubelet[3177]: I1216 13:07:00.009779 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-cni-bin-dir\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.010178 kubelet[3177]: I1216 13:07:00.009843 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d02819f-9a3e-4afb-8414-1e96b1fe2e87-node-certs\") pod \"calico-node-zt8g8\" (UID: \"9d02819f-9a3e-4afb-8414-1e96b1fe2e87\") " pod="calico-system/calico-node-zt8g8" Dec 16 13:07:00.060596 containerd[1711]: time="2025-12-16T13:07:00.060550709Z" level=info msg="connecting to shim a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7" address="unix:///run/containerd/s/4cd69ec1b1e992a1cd3082a16b790628df9403e6a3658ee0c0c835a18ee28d1c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:00.074933 kubelet[3177]: E1216 13:07:00.074894 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:00.089540 systemd[1]: Started cri-containerd-a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7.scope - libcontainer container a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7. 
Dec 16 13:07:00.110115 kubelet[3177]: I1216 13:07:00.110079 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ccba9c4c-4f0e-4c2b-88e7-422574903af0-socket-dir\") pod \"csi-node-driver-w7769\" (UID: \"ccba9c4c-4f0e-4c2b-88e7-422574903af0\") " pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:00.110115 kubelet[3177]: I1216 13:07:00.110116 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ccba9c4c-4f0e-4c2b-88e7-422574903af0-varrun\") pod \"csi-node-driver-w7769\" (UID: \"ccba9c4c-4f0e-4c2b-88e7-422574903af0\") " pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:00.110265 kubelet[3177]: I1216 13:07:00.110149 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ccba9c4c-4f0e-4c2b-88e7-422574903af0-registration-dir\") pod \"csi-node-driver-w7769\" (UID: \"ccba9c4c-4f0e-4c2b-88e7-422574903af0\") " pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:00.110265 kubelet[3177]: I1216 13:07:00.110193 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccba9c4c-4f0e-4c2b-88e7-422574903af0-kubelet-dir\") pod \"csi-node-driver-w7769\" (UID: \"ccba9c4c-4f0e-4c2b-88e7-422574903af0\") " pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:00.110265 kubelet[3177]: I1216 13:07:00.110209 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8m4\" (UniqueName: \"kubernetes.io/projected/ccba9c4c-4f0e-4c2b-88e7-422574903af0-kube-api-access-9p8m4\") pod \"csi-node-driver-w7769\" (UID: \"ccba9c4c-4f0e-4c2b-88e7-422574903af0\") " pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:00.116155 
kubelet[3177]: E1216 13:07:00.116118 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.116155 kubelet[3177]: W1216 13:07:00.116142 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.116356 kubelet[3177]: E1216 13:07:00.116161 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.119956 kubelet[3177]: E1216 13:07:00.119937 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.119956 kubelet[3177]: W1216 13:07:00.119954 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.120080 kubelet[3177]: E1216 13:07:00.119971 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.127918 kubelet[3177]: E1216 13:07:00.127862 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.127918 kubelet[3177]: W1216 13:07:00.127874 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.127918 kubelet[3177]: E1216 13:07:00.127893 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.169670 containerd[1711]: time="2025-12-16T13:07:00.169583327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f5b4ccb87-qft5c,Uid:dc0200d5-dc86-4f93-adfe-e426a95b8fe6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7\"" Dec 16 13:07:00.171826 containerd[1711]: time="2025-12-16T13:07:00.171740920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:07:00.210649 kubelet[3177]: E1216 13:07:00.210626 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.210649 kubelet[3177]: W1216 13:07:00.210644 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.210926 kubelet[3177]: E1216 13:07:00.210662 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.210926 kubelet[3177]: E1216 13:07:00.210803 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.210926 kubelet[3177]: W1216 13:07:00.210810 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.210926 kubelet[3177]: E1216 13:07:00.210819 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.211083 kubelet[3177]: E1216 13:07:00.211065 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.211131 kubelet[3177]: W1216 13:07:00.211117 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.211229 kubelet[3177]: E1216 13:07:00.211130 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.211359 kubelet[3177]: E1216 13:07:00.211348 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.211566 kubelet[3177]: W1216 13:07:00.211360 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.211566 kubelet[3177]: E1216 13:07:00.211371 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.211566 kubelet[3177]: E1216 13:07:00.211535 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.211566 kubelet[3177]: W1216 13:07:00.211541 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.211566 kubelet[3177]: E1216 13:07:00.211549 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.211806 kubelet[3177]: E1216 13:07:00.211794 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.211806 kubelet[3177]: W1216 13:07:00.211804 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.211902 kubelet[3177]: E1216 13:07:00.211812 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.212057 kubelet[3177]: E1216 13:07:00.211944 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.212057 kubelet[3177]: W1216 13:07:00.211952 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.212057 kubelet[3177]: E1216 13:07:00.211959 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.212176 kubelet[3177]: E1216 13:07:00.212078 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.212176 kubelet[3177]: W1216 13:07:00.212098 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.212176 kubelet[3177]: E1216 13:07:00.212105 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.212319 kubelet[3177]: E1216 13:07:00.212205 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.212319 kubelet[3177]: W1216 13:07:00.212210 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.212319 kubelet[3177]: E1216 13:07:00.212216 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.212458 kubelet[3177]: E1216 13:07:00.212331 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.212458 kubelet[3177]: W1216 13:07:00.212337 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.212458 kubelet[3177]: E1216 13:07:00.212343 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212470 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.213982 kubelet[3177]: W1216 13:07:00.212491 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212498 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212598 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.213982 kubelet[3177]: W1216 13:07:00.212603 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212607 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212715 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.213982 kubelet[3177]: W1216 13:07:00.212722 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212730 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.213982 kubelet[3177]: E1216 13:07:00.212836 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214211 kubelet[3177]: W1216 13:07:00.212841 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.212845 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.212945 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214211 kubelet[3177]: W1216 13:07:00.212950 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.212956 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.213049 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214211 kubelet[3177]: W1216 13:07:00.213053 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.213058 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214211 kubelet[3177]: E1216 13:07:00.213151 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214211 kubelet[3177]: W1216 13:07:00.213156 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213160 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213251 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214506 kubelet[3177]: W1216 13:07:00.213255 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213260 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213379 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214506 kubelet[3177]: W1216 13:07:00.213386 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213415 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213532 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214506 kubelet[3177]: W1216 13:07:00.213536 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214506 kubelet[3177]: E1216 13:07:00.213540 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213644 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214689 kubelet[3177]: W1216 13:07:00.213649 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213655 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213751 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214689 kubelet[3177]: W1216 13:07:00.213755 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213760 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213939 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214689 kubelet[3177]: W1216 13:07:00.213945 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.213952 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.214689 kubelet[3177]: E1216 13:07:00.214123 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214841 kubelet[3177]: W1216 13:07:00.214129 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214841 kubelet[3177]: E1216 13:07:00.214136 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.214841 kubelet[3177]: E1216 13:07:00.214524 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.214841 kubelet[3177]: W1216 13:07:00.214533 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.214841 kubelet[3177]: E1216 13:07:00.214543 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:00.215993 containerd[1711]: time="2025-12-16T13:07:00.215964486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zt8g8,Uid:9d02819f-9a3e-4afb-8414-1e96b1fe2e87,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:00.226133 kubelet[3177]: E1216 13:07:00.226115 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:00.226133 kubelet[3177]: W1216 13:07:00.226132 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:00.226225 kubelet[3177]: E1216 13:07:00.226146 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:00.276611 containerd[1711]: time="2025-12-16T13:07:00.275766386Z" level=info msg="connecting to shim d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81" address="unix:///run/containerd/s/61fb43751e25d72729ab62f568516fa36da265bfc949da67e7245fa5a611d54f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:00.297599 systemd[1]: Started cri-containerd-d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81.scope - libcontainer container d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81. Dec 16 13:07:00.324924 containerd[1711]: time="2025-12-16T13:07:00.324892824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zt8g8,Uid:9d02819f-9a3e-4afb-8414-1e96b1fe2e87,Namespace:calico-system,Attempt:0,} returns sandbox id \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\"" Dec 16 13:07:01.717815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852419343.mount: Deactivated successfully. 
Dec 16 13:07:01.904755 kubelet[3177]: E1216 13:07:01.904128 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:02.279665 containerd[1711]: time="2025-12-16T13:07:02.279166740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:02.282457 containerd[1711]: time="2025-12-16T13:07:02.282430005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:07:02.286232 containerd[1711]: time="2025-12-16T13:07:02.286188448Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:02.289888 containerd[1711]: time="2025-12-16T13:07:02.289845037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:02.290355 containerd[1711]: time="2025-12-16T13:07:02.290237432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.118465509s" Dec 16 13:07:02.290355 containerd[1711]: time="2025-12-16T13:07:02.290265727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:07:02.291424 containerd[1711]: time="2025-12-16T13:07:02.291223633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:07:02.309117 containerd[1711]: time="2025-12-16T13:07:02.309094159Z" level=info msg="CreateContainer within sandbox \"a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:07:02.331514 containerd[1711]: time="2025-12-16T13:07:02.328543722Z" level=info msg="Container 18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:02.365217 containerd[1711]: time="2025-12-16T13:07:02.365188431Z" level=info msg="CreateContainer within sandbox \"a32a86e56e77c1752b8d0d64d0b8ddc0abb91f6e3169adfdd21e633b01a58ac7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85\"" Dec 16 13:07:02.365578 containerd[1711]: time="2025-12-16T13:07:02.365556293Z" level=info msg="StartContainer for \"18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85\"" Dec 16 13:07:02.366879 containerd[1711]: time="2025-12-16T13:07:02.366836734Z" level=info msg="connecting to shim 18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85" address="unix:///run/containerd/s/4cd69ec1b1e992a1cd3082a16b790628df9403e6a3658ee0c0c835a18ee28d1c" protocol=ttrpc version=3 Dec 16 13:07:02.391545 systemd[1]: Started cri-containerd-18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85.scope - libcontainer container 18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85. 
Dec 16 13:07:02.440353 containerd[1711]: time="2025-12-16T13:07:02.440322616Z" level=info msg="StartContainer for \"18e7f51b0f036e7432126e91e70ac5bb86ef11869a813a4c0e8174639c862a85\" returns successfully" Dec 16 13:07:02.985343 kubelet[3177]: I1216 13:07:02.985201 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f5b4ccb87-qft5c" podStartSLOduration=1.865105644 podStartE2EDuration="3.985173329s" podCreationTimestamp="2025-12-16 13:06:59 +0000 UTC" firstStartedPulling="2025-12-16 13:07:00.171014007 +0000 UTC m=+20.364512212" lastFinishedPulling="2025-12-16 13:07:02.291081692 +0000 UTC m=+22.484579897" observedRunningTime="2025-12-16 13:07:02.98501198 +0000 UTC m=+23.178510180" watchObservedRunningTime="2025-12-16 13:07:02.985173329 +0000 UTC m=+23.178671519" Dec 16 13:07:03.009419 kubelet[3177]: E1216 13:07:03.009372 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.009653 kubelet[3177]: W1216 13:07:03.009548 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.009653 kubelet[3177]: E1216 13:07:03.009574 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.009820 kubelet[3177]: E1216 13:07:03.009797 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.009820 kubelet[3177]: W1216 13:07:03.009817 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.009908 kubelet[3177]: E1216 13:07:03.009827 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.009950 kubelet[3177]: E1216 13:07:03.009939 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.009950 kubelet[3177]: W1216 13:07:03.009945 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010024 kubelet[3177]: E1216 13:07:03.009952 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.010182 kubelet[3177]: E1216 13:07:03.010168 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.010182 kubelet[3177]: W1216 13:07:03.010180 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010267 kubelet[3177]: E1216 13:07:03.010189 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.010322 kubelet[3177]: E1216 13:07:03.010314 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.010322 kubelet[3177]: W1216 13:07:03.010321 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010480 kubelet[3177]: E1216 13:07:03.010328 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.010480 kubelet[3177]: E1216 13:07:03.010465 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.010480 kubelet[3177]: W1216 13:07:03.010471 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010558 kubelet[3177]: E1216 13:07:03.010485 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.010713 kubelet[3177]: E1216 13:07:03.010621 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.010713 kubelet[3177]: W1216 13:07:03.010642 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010713 kubelet[3177]: E1216 13:07:03.010650 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.010833 kubelet[3177]: E1216 13:07:03.010818 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.010833 kubelet[3177]: W1216 13:07:03.010829 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.010888 kubelet[3177]: E1216 13:07:03.010837 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.011008 kubelet[3177]: E1216 13:07:03.010987 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.011054 kubelet[3177]: W1216 13:07:03.011008 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.011054 kubelet[3177]: E1216 13:07:03.011015 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.011158 kubelet[3177]: E1216 13:07:03.011110 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.011158 kubelet[3177]: W1216 13:07:03.011115 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.011158 kubelet[3177]: E1216 13:07:03.011121 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.011238 kubelet[3177]: E1216 13:07:03.011209 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.011238 kubelet[3177]: W1216 13:07:03.011214 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.011238 kubelet[3177]: E1216 13:07:03.011219 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.011622 kubelet[3177]: E1216 13:07:03.011360 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.011622 kubelet[3177]: W1216 13:07:03.011367 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.011622 kubelet[3177]: E1216 13:07:03.011373 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.011622 kubelet[3177]: E1216 13:07:03.011512 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.011622 kubelet[3177]: W1216 13:07:03.011518 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.011622 kubelet[3177]: E1216 13:07:03.011526 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.011622 kubelet[3177]: E1216 13:07:03.011623 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.012235 kubelet[3177]: W1216 13:07:03.011628 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.012235 kubelet[3177]: E1216 13:07:03.011635 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.012235 kubelet[3177]: E1216 13:07:03.011729 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.012235 kubelet[3177]: W1216 13:07:03.011734 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.012235 kubelet[3177]: E1216 13:07:03.011740 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.032061 kubelet[3177]: E1216 13:07:03.032040 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032061 kubelet[3177]: W1216 13:07:03.032057 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032186 kubelet[3177]: E1216 13:07:03.032071 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.032231 kubelet[3177]: E1216 13:07:03.032212 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032231 kubelet[3177]: W1216 13:07:03.032218 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032231 kubelet[3177]: E1216 13:07:03.032226 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.032396 kubelet[3177]: E1216 13:07:03.032383 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032424 kubelet[3177]: W1216 13:07:03.032411 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032424 kubelet[3177]: E1216 13:07:03.032420 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.032554 kubelet[3177]: E1216 13:07:03.032531 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032554 kubelet[3177]: W1216 13:07:03.032549 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032619 kubelet[3177]: E1216 13:07:03.032556 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.032683 kubelet[3177]: E1216 13:07:03.032675 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032709 kubelet[3177]: W1216 13:07:03.032682 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032709 kubelet[3177]: E1216 13:07:03.032689 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.032856 kubelet[3177]: E1216 13:07:03.032845 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.032856 kubelet[3177]: W1216 13:07:03.032854 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.032911 kubelet[3177]: E1216 13:07:03.032861 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.033340 kubelet[3177]: E1216 13:07:03.033320 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.033340 kubelet[3177]: W1216 13:07:03.033337 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.033438 kubelet[3177]: E1216 13:07:03.033348 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.033519 kubelet[3177]: E1216 13:07:03.033506 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.033519 kubelet[3177]: W1216 13:07:03.033518 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.033569 kubelet[3177]: E1216 13:07:03.033526 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.033720 kubelet[3177]: E1216 13:07:03.033702 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.033720 kubelet[3177]: W1216 13:07:03.033718 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.033772 kubelet[3177]: E1216 13:07:03.033727 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.033865 kubelet[3177]: E1216 13:07:03.033853 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.033865 kubelet[3177]: W1216 13:07:03.033861 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.033923 kubelet[3177]: E1216 13:07:03.033868 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.033985 kubelet[3177]: E1216 13:07:03.033975 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.033985 kubelet[3177]: W1216 13:07:03.033983 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034033 kubelet[3177]: E1216 13:07:03.033990 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.034112 kubelet[3177]: E1216 13:07:03.034099 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034112 kubelet[3177]: W1216 13:07:03.034106 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034159 kubelet[3177]: E1216 13:07:03.034113 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.034270 kubelet[3177]: E1216 13:07:03.034258 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034270 kubelet[3177]: W1216 13:07:03.034265 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034315 kubelet[3177]: E1216 13:07:03.034272 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.034523 kubelet[3177]: E1216 13:07:03.034511 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034523 kubelet[3177]: W1216 13:07:03.034519 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034582 kubelet[3177]: E1216 13:07:03.034526 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.034654 kubelet[3177]: E1216 13:07:03.034641 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034654 kubelet[3177]: W1216 13:07:03.034648 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034702 kubelet[3177]: E1216 13:07:03.034655 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.034780 kubelet[3177]: E1216 13:07:03.034759 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034780 kubelet[3177]: W1216 13:07:03.034777 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.034827 kubelet[3177]: E1216 13:07:03.034783 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.034964 kubelet[3177]: E1216 13:07:03.034953 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.034964 kubelet[3177]: W1216 13:07:03.034961 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.035017 kubelet[3177]: E1216 13:07:03.034967 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:07:03.035152 kubelet[3177]: E1216 13:07:03.035131 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:07:03.035180 kubelet[3177]: W1216 13:07:03.035154 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:07:03.035180 kubelet[3177]: E1216 13:07:03.035161 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:07:03.719125 containerd[1711]: time="2025-12-16T13:07:03.719083708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:03.722077 containerd[1711]: time="2025-12-16T13:07:03.722032317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 16 13:07:03.725099 containerd[1711]: time="2025-12-16T13:07:03.725072512Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:03.730004 containerd[1711]: time="2025-12-16T13:07:03.729608120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:03.730004 containerd[1711]: time="2025-12-16T13:07:03.729893683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.438641793s" Dec 16 13:07:03.730004 containerd[1711]: time="2025-12-16T13:07:03.729928688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:07:03.736984 containerd[1711]: time="2025-12-16T13:07:03.736954713Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:07:03.761372 containerd[1711]: time="2025-12-16T13:07:03.761258735Z" level=info msg="Container d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:03.780722 containerd[1711]: time="2025-12-16T13:07:03.780695797Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58\"" Dec 16 13:07:03.781432 containerd[1711]: time="2025-12-16T13:07:03.781407144Z" level=info msg="StartContainer for \"d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58\"" Dec 16 13:07:03.782733 containerd[1711]: time="2025-12-16T13:07:03.782673118Z" level=info msg="connecting to shim d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58" address="unix:///run/containerd/s/61fb43751e25d72729ab62f568516fa36da265bfc949da67e7245fa5a611d54f" protocol=ttrpc version=3 Dec 16 13:07:03.805555 systemd[1]: Started cri-containerd-d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58.scope - libcontainer container d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58. Dec 16 13:07:03.867621 containerd[1711]: time="2025-12-16T13:07:03.867593361Z" level=info msg="StartContainer for \"d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58\" returns successfully" Dec 16 13:07:03.868076 systemd[1]: cri-containerd-d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58.scope: Deactivated successfully. 
Dec 16 13:07:03.872066 containerd[1711]: time="2025-12-16T13:07:03.872023334Z" level=info msg="received container exit event container_id:\"d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58\" id:\"d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58\" pid:3809 exited_at:{seconds:1765890423 nanos:871200434}" Dec 16 13:07:03.890444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d051a44f8741353cf47da52595d618fd65ef7e27c889a466b8ec88d5c4cabd58-rootfs.mount: Deactivated successfully. Dec 16 13:07:03.904173 kubelet[3177]: E1216 13:07:03.904138 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:04.005946 kubelet[3177]: I1216 13:07:03.977109 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:07:05.905434 kubelet[3177]: E1216 13:07:05.904517 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:05.985287 containerd[1711]: time="2025-12-16T13:07:05.985200441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:07:07.904838 kubelet[3177]: E1216 13:07:07.904035 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:08.639483 containerd[1711]: 
time="2025-12-16T13:07:08.639443872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:08.642844 containerd[1711]: time="2025-12-16T13:07:08.642728249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:07:08.646103 containerd[1711]: time="2025-12-16T13:07:08.646073158Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:08.650076 containerd[1711]: time="2025-12-16T13:07:08.650032180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:08.650770 containerd[1711]: time="2025-12-16T13:07:08.650460708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.665093923s" Dec 16 13:07:08.650770 containerd[1711]: time="2025-12-16T13:07:08.650488465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:07:08.658688 containerd[1711]: time="2025-12-16T13:07:08.658660174Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:07:08.682999 containerd[1711]: time="2025-12-16T13:07:08.682008454Z" level=info msg="Container 
d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:08.686377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216184886.mount: Deactivated successfully. Dec 16 13:07:08.702287 containerd[1711]: time="2025-12-16T13:07:08.702263061Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2\"" Dec 16 13:07:08.703446 containerd[1711]: time="2025-12-16T13:07:08.702614939Z" level=info msg="StartContainer for \"d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2\"" Dec 16 13:07:08.704284 containerd[1711]: time="2025-12-16T13:07:08.704252533Z" level=info msg="connecting to shim d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2" address="unix:///run/containerd/s/61fb43751e25d72729ab62f568516fa36da265bfc949da67e7245fa5a611d54f" protocol=ttrpc version=3 Dec 16 13:07:08.727558 systemd[1]: Started cri-containerd-d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2.scope - libcontainer container d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2. Dec 16 13:07:08.779703 containerd[1711]: time="2025-12-16T13:07:08.779676203Z" level=info msg="StartContainer for \"d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2\" returns successfully" Dec 16 13:07:09.873169 systemd[1]: cri-containerd-d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2.scope: Deactivated successfully. Dec 16 13:07:09.873786 systemd[1]: cri-containerd-d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2.scope: Consumed 404ms CPU time, 192.9M memory peak, 171.3M written to disk. 
Dec 16 13:07:09.875058 containerd[1711]: time="2025-12-16T13:07:09.874886094Z" level=info msg="received container exit event container_id:\"d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2\" id:\"d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2\" pid:3869 exited_at:{seconds:1765890429 nanos:874595178}" Dec 16 13:07:09.894076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d14b27825e25357eb1b79f5b24387025813b350a1ba3e710fd84c0ce047686c2-rootfs.mount: Deactivated successfully. Dec 16 13:07:09.907639 kubelet[3177]: E1216 13:07:09.907605 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:09.943117 kubelet[3177]: I1216 13:07:09.943083 3177 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:07:10.154829 systemd[1]: Created slice kubepods-burstable-pod8ace8d57_2637_433f_b5cf_8ad4a3667131.slice - libcontainer container kubepods-burstable-pod8ace8d57_2637_433f_b5cf_8ad4a3667131.slice. 
Dec 16 13:07:10.301342 kubelet[3177]: I1216 13:07:10.176995 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbbk\" (UniqueName: \"kubernetes.io/projected/8ace8d57-2637-433f-b5cf-8ad4a3667131-kube-api-access-dcbbk\") pod \"coredns-66bc5c9577-xg8cv\" (UID: \"8ace8d57-2637-433f-b5cf-8ad4a3667131\") " pod="kube-system/coredns-66bc5c9577-xg8cv" Dec 16 13:07:10.301342 kubelet[3177]: I1216 13:07:10.177020 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ace8d57-2637-433f-b5cf-8ad4a3667131-config-volume\") pod \"coredns-66bc5c9577-xg8cv\" (UID: \"8ace8d57-2637-433f-b5cf-8ad4a3667131\") " pod="kube-system/coredns-66bc5c9577-xg8cv" Dec 16 13:07:10.500362 systemd[1]: Created slice kubepods-besteffort-pod8b80324f_a72e_4138_8ef4_af2e0235c136.slice - libcontainer container kubepods-besteffort-pod8b80324f_a72e_4138_8ef4_af2e0235c136.slice. 
Dec 16 13:07:10.579419 kubelet[3177]: I1216 13:07:10.579231 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-backend-key-pair\") pod \"whisker-844559678-xvg5z\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " pod="calico-system/whisker-844559678-xvg5z" Dec 16 13:07:10.579419 kubelet[3177]: I1216 13:07:10.579276 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-ca-bundle\") pod \"whisker-844559678-xvg5z\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " pod="calico-system/whisker-844559678-xvg5z" Dec 16 13:07:10.579419 kubelet[3177]: I1216 13:07:10.579298 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xwx4\" (UniqueName: \"kubernetes.io/projected/8b80324f-a72e-4138-8ef4-af2e0235c136-kube-api-access-2xwx4\") pod \"whisker-844559678-xvg5z\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " pod="calico-system/whisker-844559678-xvg5z" Dec 16 13:07:10.747734 containerd[1711]: time="2025-12-16T13:07:10.747613166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xg8cv,Uid:8ace8d57-2637-433f-b5cf-8ad4a3667131,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:10.776037 systemd[1]: Created slice kubepods-burstable-pod317b8a1f_8f93_487a_a0c5_8114cd9eb845.slice - libcontainer container kubepods-burstable-pod317b8a1f_8f93_487a_a0c5_8114cd9eb845.slice. Dec 16 13:07:10.795351 systemd[1]: Created slice kubepods-besteffort-pod9e221d7a_639b_4bcb_8508_8080960234ac.slice - libcontainer container kubepods-besteffort-pod9e221d7a_639b_4bcb_8508_8080960234ac.slice. 
Dec 16 13:07:10.804738 systemd[1]: Created slice kubepods-besteffort-pod7a82fe54_15b6_44bf_9df4_aa8e33fe1999.slice - libcontainer container kubepods-besteffort-pod7a82fe54_15b6_44bf_9df4_aa8e33fe1999.slice. Dec 16 13:07:10.815857 containerd[1711]: time="2025-12-16T13:07:10.815478510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844559678-xvg5z,Uid:8b80324f-a72e-4138-8ef4-af2e0235c136,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:10.826280 systemd[1]: Created slice kubepods-besteffort-pod6a37c451_a2e0_4310_89e9_a7160f2123e5.slice - libcontainer container kubepods-besteffort-pod6a37c451_a2e0_4310_89e9_a7160f2123e5.slice. Dec 16 13:07:10.838541 systemd[1]: Created slice kubepods-besteffort-podea5faaca_0d4e_431d_9277_cb31c23101e9.slice - libcontainer container kubepods-besteffort-podea5faaca_0d4e_431d_9277_cb31c23101e9.slice. Dec 16 13:07:10.875451 containerd[1711]: time="2025-12-16T13:07:10.875258589Z" level=error msg="Failed to destroy network for sandbox \"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.878937 containerd[1711]: time="2025-12-16T13:07:10.878868959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xg8cv,Uid:8ace8d57-2637-433f-b5cf-8ad4a3667131,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.879619 kubelet[3177]: E1216 13:07:10.879074 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.879619 kubelet[3177]: E1216 13:07:10.879120 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xg8cv" Dec 16 13:07:10.879619 kubelet[3177]: E1216 13:07:10.879138 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xg8cv" Dec 16 13:07:10.879854 kubelet[3177]: E1216 13:07:10.879186 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xg8cv_kube-system(8ace8d57-2637-433f-b5cf-8ad4a3667131)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xg8cv_kube-system(8ace8d57-2637-433f-b5cf-8ad4a3667131)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"340af06cc7d5a49cb5a04707620574d13a2a3ff6554c21eaf0c1a14ee2f63d42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-xg8cv" podUID="8ace8d57-2637-433f-b5cf-8ad4a3667131" Dec 16 13:07:10.882609 kubelet[3177]: I1216 13:07:10.882581 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/317b8a1f-8f93-487a-a0c5-8114cd9eb845-config-volume\") pod \"coredns-66bc5c9577-8sth5\" (UID: \"317b8a1f-8f93-487a-a0c5-8114cd9eb845\") " pod="kube-system/coredns-66bc5c9577-8sth5" Dec 16 13:07:10.882787 kubelet[3177]: I1216 13:07:10.882616 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmrx\" (UniqueName: \"kubernetes.io/projected/317b8a1f-8f93-487a-a0c5-8114cd9eb845-kube-api-access-ljmrx\") pod \"coredns-66bc5c9577-8sth5\" (UID: \"317b8a1f-8f93-487a-a0c5-8114cd9eb845\") " pod="kube-system/coredns-66bc5c9577-8sth5" Dec 16 13:07:10.882787 kubelet[3177]: I1216 13:07:10.882638 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh65\" (UniqueName: \"kubernetes.io/projected/7a82fe54-15b6-44bf-9df4-aa8e33fe1999-kube-api-access-qjh65\") pod \"calico-apiserver-5d6f84fc95-wd4m8\" (UID: \"7a82fe54-15b6-44bf-9df4-aa8e33fe1999\") " pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" Dec 16 13:07:10.882787 kubelet[3177]: I1216 13:07:10.882659 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a37c451-a2e0-4310-89e9-a7160f2123e5-calico-apiserver-certs\") pod \"calico-apiserver-5d6f84fc95-w97rk\" (UID: \"6a37c451-a2e0-4310-89e9-a7160f2123e5\") " pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" Dec 16 13:07:10.882787 kubelet[3177]: I1216 13:07:10.882677 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9e221d7a-639b-4bcb-8508-8080960234ac-tigera-ca-bundle\") pod \"calico-kube-controllers-66fdf94b9c-fggbk\" (UID: \"9e221d7a-639b-4bcb-8508-8080960234ac\") " pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" Dec 16 13:07:10.882787 kubelet[3177]: I1216 13:07:10.882693 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cx4\" (UniqueName: \"kubernetes.io/projected/9e221d7a-639b-4bcb-8508-8080960234ac-kube-api-access-72cx4\") pod \"calico-kube-controllers-66fdf94b9c-fggbk\" (UID: \"9e221d7a-639b-4bcb-8508-8080960234ac\") " pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" Dec 16 13:07:10.883127 kubelet[3177]: I1216 13:07:10.882722 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea5faaca-0d4e-431d-9277-cb31c23101e9-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-4jzfv\" (UID: \"ea5faaca-0d4e-431d-9277-cb31c23101e9\") " pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:10.883127 kubelet[3177]: I1216 13:07:10.882739 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rgd\" (UniqueName: \"kubernetes.io/projected/ea5faaca-0d4e-431d-9277-cb31c23101e9-kube-api-access-65rgd\") pod \"goldmane-7c778bb748-4jzfv\" (UID: \"ea5faaca-0d4e-431d-9277-cb31c23101e9\") " pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:10.883127 kubelet[3177]: I1216 13:07:10.882762 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzb4c\" (UniqueName: \"kubernetes.io/projected/6a37c451-a2e0-4310-89e9-a7160f2123e5-kube-api-access-bzb4c\") pod \"calico-apiserver-5d6f84fc95-w97rk\" (UID: \"6a37c451-a2e0-4310-89e9-a7160f2123e5\") " pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" Dec 16 13:07:10.883127 kubelet[3177]: I1216 
13:07:10.882779 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ea5faaca-0d4e-431d-9277-cb31c23101e9-goldmane-key-pair\") pod \"goldmane-7c778bb748-4jzfv\" (UID: \"ea5faaca-0d4e-431d-9277-cb31c23101e9\") " pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:10.883127 kubelet[3177]: I1216 13:07:10.882798 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a82fe54-15b6-44bf-9df4-aa8e33fe1999-calico-apiserver-certs\") pod \"calico-apiserver-5d6f84fc95-wd4m8\" (UID: \"7a82fe54-15b6-44bf-9df4-aa8e33fe1999\") " pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" Dec 16 13:07:10.883452 kubelet[3177]: I1216 13:07:10.883414 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5faaca-0d4e-431d-9277-cb31c23101e9-config\") pod \"goldmane-7c778bb748-4jzfv\" (UID: \"ea5faaca-0d4e-431d-9277-cb31c23101e9\") " pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:10.895871 systemd[1]: run-netns-cni\x2d7b1f49a3\x2d1bcb\x2da843\x2d3d28\x2d0fa26d8a91dc.mount: Deactivated successfully. Dec 16 13:07:10.905436 containerd[1711]: time="2025-12-16T13:07:10.905381953Z" level=error msg="Failed to destroy network for sandbox \"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.907023 systemd[1]: run-netns-cni\x2dfe74f667\x2dc044\x2d94de\x2d7520\x2d2bd638392170.mount: Deactivated successfully. 
Dec 16 13:07:10.911014 containerd[1711]: time="2025-12-16T13:07:10.910975751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844559678-xvg5z,Uid:8b80324f-a72e-4138-8ef4-af2e0235c136,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.911185 kubelet[3177]: E1216 13:07:10.911152 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:10.911645 kubelet[3177]: E1216 13:07:10.911201 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-844559678-xvg5z" Dec 16 13:07:10.911645 kubelet[3177]: E1216 13:07:10.911220 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-844559678-xvg5z" Dec 16 13:07:10.911645 kubelet[3177]: E1216 13:07:10.911270 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-844559678-xvg5z_calico-system(8b80324f-a72e-4138-8ef4-af2e0235c136)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-844559678-xvg5z_calico-system(8b80324f-a72e-4138-8ef4-af2e0235c136)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20ccbb21f9c87d1b3119a7084fc11da0be16fd962ae96be7d9a13b5fc0f96972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-844559678-xvg5z" podUID="8b80324f-a72e-4138-8ef4-af2e0235c136" Dec 16 13:07:11.027607 containerd[1711]: time="2025-12-16T13:07:11.027532544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:07:11.092263 containerd[1711]: time="2025-12-16T13:07:11.092160359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8sth5,Uid:317b8a1f-8f93-487a-a0c5-8114cd9eb845,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:11.109659 containerd[1711]: time="2025-12-16T13:07:11.109623745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdf94b9c-fggbk,Uid:9e221d7a-639b-4bcb-8508-8080960234ac,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:11.120746 containerd[1711]: time="2025-12-16T13:07:11.120702448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-wd4m8,Uid:7a82fe54-15b6-44bf-9df4-aa8e33fe1999,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:07:11.141903 containerd[1711]: time="2025-12-16T13:07:11.141875584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-w97rk,Uid:6a37c451-a2e0-4310-89e9-a7160f2123e5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 
13:07:11.150339 containerd[1711]: time="2025-12-16T13:07:11.150233848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4jzfv,Uid:ea5faaca-0d4e-431d-9277-cb31c23101e9,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:11.161726 containerd[1711]: time="2025-12-16T13:07:11.161666359Z" level=error msg="Failed to destroy network for sandbox \"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.166935 containerd[1711]: time="2025-12-16T13:07:11.166865654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8sth5,Uid:317b8a1f-8f93-487a-a0c5-8114cd9eb845,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.167417 kubelet[3177]: E1216 13:07:11.167366 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.167529 kubelet[3177]: E1216 13:07:11.167437 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8sth5" Dec 16 13:07:11.167529 kubelet[3177]: E1216 13:07:11.167458 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8sth5" Dec 16 13:07:11.167529 kubelet[3177]: E1216 13:07:11.167511 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8sth5_kube-system(317b8a1f-8f93-487a-a0c5-8114cd9eb845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8sth5_kube-system(317b8a1f-8f93-487a-a0c5-8114cd9eb845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0785ee1228924ef6f95429f1d7d89893917d81ce703f8494f83d4e37c14387cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8sth5" podUID="317b8a1f-8f93-487a-a0c5-8114cd9eb845" Dec 16 13:07:11.214418 containerd[1711]: time="2025-12-16T13:07:11.214327272Z" level=error msg="Failed to destroy network for sandbox \"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.218610 containerd[1711]: time="2025-12-16T13:07:11.218564133Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-66fdf94b9c-fggbk,Uid:9e221d7a-639b-4bcb-8508-8080960234ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.220624 kubelet[3177]: E1216 13:07:11.220529 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.220624 kubelet[3177]: E1216 13:07:11.220586 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" Dec 16 13:07:11.220624 kubelet[3177]: E1216 13:07:11.220607 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" Dec 16 13:07:11.220769 kubelet[3177]: E1216 
13:07:11.220668 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74bdf6ce7040e40624810bc1a3df01e021fe823aa407a746f30f51b360dd2e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:07:11.241602 containerd[1711]: time="2025-12-16T13:07:11.241507694Z" level=error msg="Failed to destroy network for sandbox \"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.244997 containerd[1711]: time="2025-12-16T13:07:11.244955385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-wd4m8,Uid:7a82fe54-15b6-44bf-9df4-aa8e33fe1999,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.245179 kubelet[3177]: E1216 13:07:11.245150 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.245231 kubelet[3177]: E1216 13:07:11.245199 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" Dec 16 13:07:11.245231 kubelet[3177]: E1216 13:07:11.245218 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" Dec 16 13:07:11.245291 kubelet[3177]: E1216 13:07:11.245267 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959c9f993f811d8e25f97f97dda3453ba3d5ef647544dccc6fef52c0d68ed71a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:11.260318 containerd[1711]: time="2025-12-16T13:07:11.259665537Z" level=error msg="Failed to destroy network for sandbox \"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.260433 containerd[1711]: time="2025-12-16T13:07:11.260293456Z" level=error msg="Failed to destroy network for sandbox \"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.263973 containerd[1711]: time="2025-12-16T13:07:11.263923123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4jzfv,Uid:ea5faaca-0d4e-431d-9277-cb31c23101e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.264782 kubelet[3177]: E1216 13:07:11.264087 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.264782 kubelet[3177]: E1216 13:07:11.264170 3177 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:11.264782 kubelet[3177]: E1216 13:07:11.264187 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4jzfv" Dec 16 13:07:11.264893 kubelet[3177]: E1216 13:07:11.264236 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d5b5b8d41b6f0339944d00a20d56244f155439b343af9618e1cade237705ca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:11.267096 containerd[1711]: time="2025-12-16T13:07:11.267061474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-w97rk,Uid:6a37c451-a2e0-4310-89e9-a7160f2123e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.267481 kubelet[3177]: E1216 13:07:11.267454 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.267556 kubelet[3177]: E1216 13:07:11.267494 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" Dec 16 13:07:11.267556 kubelet[3177]: E1216 13:07:11.267515 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" Dec 16 13:07:11.267615 kubelet[3177]: E1216 13:07:11.267577 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08b8b5eab1db51c3a91e83b9858cad01fab9955ae89262a9693a89c2ec6e96f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:11.909455 systemd[1]: Created slice kubepods-besteffort-podccba9c4c_4f0e_4c2b_88e7_422574903af0.slice - libcontainer container kubepods-besteffort-podccba9c4c_4f0e_4c2b_88e7_422574903af0.slice. Dec 16 13:07:11.916922 containerd[1711]: time="2025-12-16T13:07:11.916884175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w7769,Uid:ccba9c4c-4f0e-4c2b-88e7-422574903af0,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:11.968296 containerd[1711]: time="2025-12-16T13:07:11.968249822Z" level=error msg="Failed to destroy network for sandbox \"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.970455 systemd[1]: run-netns-cni\x2d879bbf62\x2d4179\x2df781\x2d767c\x2da4e27c4e3098.mount: Deactivated successfully. 
Dec 16 13:07:11.973600 containerd[1711]: time="2025-12-16T13:07:11.973562278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w7769,Uid:ccba9c4c-4f0e-4c2b-88e7-422574903af0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.973795 kubelet[3177]: E1216 13:07:11.973753 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:07:11.974015 kubelet[3177]: E1216 13:07:11.973813 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w7769" Dec 16 13:07:11.974015 kubelet[3177]: E1216 13:07:11.973833 3177 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w7769" 
Dec 16 13:07:11.974015 kubelet[3177]: E1216 13:07:11.973883 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232126f523bba7ecdcc5946422927687ad68feb3d243222a00a41770d2442aef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:15.525874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130503843.mount: Deactivated successfully. Dec 16 13:07:15.555206 containerd[1711]: time="2025-12-16T13:07:15.555159790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.558138 containerd[1711]: time="2025-12-16T13:07:15.558108057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:07:15.561970 containerd[1711]: time="2025-12-16T13:07:15.561927403Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.565468 containerd[1711]: time="2025-12-16T13:07:15.565439471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.565821 containerd[1711]: time="2025-12-16T13:07:15.565799726Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.538227438s" Dec 16 13:07:15.565904 containerd[1711]: time="2025-12-16T13:07:15.565892092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:07:15.585320 containerd[1711]: time="2025-12-16T13:07:15.585290460Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:07:15.605328 containerd[1711]: time="2025-12-16T13:07:15.604527831Z" level=info msg="Container 32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:15.625181 containerd[1711]: time="2025-12-16T13:07:15.625152082Z" level=info msg="CreateContainer within sandbox \"d007c1b04b09d511ffaae5678a3fc2037040ba173875f51741fd44a2fa049f81\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22\"" Dec 16 13:07:15.625869 containerd[1711]: time="2025-12-16T13:07:15.625649419Z" level=info msg="StartContainer for \"32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22\"" Dec 16 13:07:15.627369 containerd[1711]: time="2025-12-16T13:07:15.627341225Z" level=info msg="connecting to shim 32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22" address="unix:///run/containerd/s/61fb43751e25d72729ab62f568516fa36da265bfc949da67e7245fa5a611d54f" protocol=ttrpc version=3 Dec 16 13:07:15.644573 systemd[1]: Started 
cri-containerd-32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22.scope - libcontainer container 32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22. Dec 16 13:07:15.718882 containerd[1711]: time="2025-12-16T13:07:15.718841603Z" level=info msg="StartContainer for \"32313158eb158c0aafc823a54e2a7db1ef37e2a8d7fa656ebc071361ea49eb22\" returns successfully" Dec 16 13:07:16.843780 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:07:16.843898 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 13:07:17.091112 kubelet[3177]: I1216 13:07:17.090093 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zt8g8" podStartSLOduration=2.849402735 podStartE2EDuration="18.090067872s" podCreationTimestamp="2025-12-16 13:06:59 +0000 UTC" firstStartedPulling="2025-12-16 13:07:00.325884034 +0000 UTC m=+20.519382234" lastFinishedPulling="2025-12-16 13:07:15.566549167 +0000 UTC m=+35.760047371" observedRunningTime="2025-12-16 13:07:16.067169144 +0000 UTC m=+36.260667350" watchObservedRunningTime="2025-12-16 13:07:17.090067872 +0000 UTC m=+37.283566099" Dec 16 13:07:17.223776 kubelet[3177]: I1216 13:07:17.223744 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-ca-bundle\") pod \"8b80324f-a72e-4138-8ef4-af2e0235c136\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " Dec 16 13:07:17.223776 kubelet[3177]: I1216 13:07:17.223783 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-backend-key-pair\") pod \"8b80324f-a72e-4138-8ef4-af2e0235c136\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " Dec 16 13:07:17.223960 kubelet[3177]: I1216 13:07:17.223805 3177 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xwx4\" (UniqueName: \"kubernetes.io/projected/8b80324f-a72e-4138-8ef4-af2e0235c136-kube-api-access-2xwx4\") pod \"8b80324f-a72e-4138-8ef4-af2e0235c136\" (UID: \"8b80324f-a72e-4138-8ef4-af2e0235c136\") " Dec 16 13:07:17.225427 kubelet[3177]: I1216 13:07:17.224381 3177 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b80324f-a72e-4138-8ef4-af2e0235c136" (UID: "8b80324f-a72e-4138-8ef4-af2e0235c136"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:07:17.228864 systemd[1]: var-lib-kubelet-pods-8b80324f\x2da72e\x2d4138\x2d8ef4\x2daf2e0235c136-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xwx4.mount: Deactivated successfully. Dec 16 13:07:17.229483 kubelet[3177]: I1216 13:07:17.229174 3177 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b80324f-a72e-4138-8ef4-af2e0235c136-kube-api-access-2xwx4" (OuterVolumeSpecName: "kube-api-access-2xwx4") pod "8b80324f-a72e-4138-8ef4-af2e0235c136" (UID: "8b80324f-a72e-4138-8ef4-af2e0235c136"). InnerVolumeSpecName "kube-api-access-2xwx4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:07:17.229631 kubelet[3177]: I1216 13:07:17.229611 3177 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b80324f-a72e-4138-8ef4-af2e0235c136" (UID: "8b80324f-a72e-4138-8ef4-af2e0235c136"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:07:17.232269 systemd[1]: var-lib-kubelet-pods-8b80324f\x2da72e\x2d4138\x2d8ef4\x2daf2e0235c136-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:07:17.324857 kubelet[3177]: I1216 13:07:17.324819 3177 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-ca-bundle\") on node \"ci-4459.2.2-a-22a3eae3ac\" DevicePath \"\"" Dec 16 13:07:17.325006 kubelet[3177]: I1216 13:07:17.324856 3177 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b80324f-a72e-4138-8ef4-af2e0235c136-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-22a3eae3ac\" DevicePath \"\"" Dec 16 13:07:17.325006 kubelet[3177]: I1216 13:07:17.324891 3177 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xwx4\" (UniqueName: \"kubernetes.io/projected/8b80324f-a72e-4138-8ef4-af2e0235c136-kube-api-access-2xwx4\") on node \"ci-4459.2.2-a-22a3eae3ac\" DevicePath \"\"" Dec 16 13:07:17.908995 systemd[1]: Removed slice kubepods-besteffort-pod8b80324f_a72e_4138_8ef4_af2e0235c136.slice - libcontainer container kubepods-besteffort-pod8b80324f_a72e_4138_8ef4_af2e0235c136.slice. Dec 16 13:07:18.147145 systemd[1]: Created slice kubepods-besteffort-pod9651eb18_927a_4296_81c4_78b2bf2e37f4.slice - libcontainer container kubepods-besteffort-pod9651eb18_927a_4296_81c4_78b2bf2e37f4.slice. 
Dec 16 13:07:18.230182 kubelet[3177]: I1216 13:07:18.230146 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9651eb18-927a-4296-81c4-78b2bf2e37f4-whisker-backend-key-pair\") pod \"whisker-6578d4d67b-2hqzh\" (UID: \"9651eb18-927a-4296-81c4-78b2bf2e37f4\") " pod="calico-system/whisker-6578d4d67b-2hqzh" Dec 16 13:07:18.230182 kubelet[3177]: I1216 13:07:18.230183 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gxbk\" (UniqueName: \"kubernetes.io/projected/9651eb18-927a-4296-81c4-78b2bf2e37f4-kube-api-access-9gxbk\") pod \"whisker-6578d4d67b-2hqzh\" (UID: \"9651eb18-927a-4296-81c4-78b2bf2e37f4\") " pod="calico-system/whisker-6578d4d67b-2hqzh" Dec 16 13:07:18.230182 kubelet[3177]: I1216 13:07:18.230217 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9651eb18-927a-4296-81c4-78b2bf2e37f4-whisker-ca-bundle\") pod \"whisker-6578d4d67b-2hqzh\" (UID: \"9651eb18-927a-4296-81c4-78b2bf2e37f4\") " pod="calico-system/whisker-6578d4d67b-2hqzh" Dec 16 13:07:18.457434 containerd[1711]: time="2025-12-16T13:07:18.457251396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578d4d67b-2hqzh,Uid:9651eb18-927a-4296-81c4-78b2bf2e37f4,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:18.613783 systemd-networkd[1337]: cali231889a78cc: Link UP Dec 16 13:07:18.615285 systemd-networkd[1337]: cali231889a78cc: Gained carrier Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.509 [INFO][4327] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.522 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0 whisker-6578d4d67b- calico-system 9651eb18-927a-4296-81c4-78b2bf2e37f4 908 0 2025-12-16 13:07:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6578d4d67b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac whisker-6578d4d67b-2hqzh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali231889a78cc [] [] }} ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.523 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.557 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" HandleID="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.557 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" HandleID="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5070), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4459.2.2-a-22a3eae3ac", "pod":"whisker-6578d4d67b-2hqzh", "timestamp":"2025-12-16 13:07:18.55779032 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.558 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.558 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.558 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.565 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.569 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.573 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.575 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.578 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.578 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 
handle="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.582 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0 Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.589 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.599 [INFO][4338] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.66.129/26] block=192.168.66.128/26 handle="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.599 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.129/26] handle="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.599 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:07:18.637994 containerd[1711]: 2025-12-16 13:07:18.599 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.129/26] IPv6=[] ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" HandleID="k8s-pod-network.72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.604 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0", GenerateName:"whisker-6578d4d67b-", Namespace:"calico-system", SelfLink:"", UID:"9651eb18-927a-4296-81c4-78b2bf2e37f4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6578d4d67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"whisker-6578d4d67b-2hqzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali231889a78cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.604 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.129/32] ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.605 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali231889a78cc ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.615 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.616 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0", GenerateName:"whisker-6578d4d67b-", Namespace:"calico-system", SelfLink:"", 
UID:"9651eb18-927a-4296-81c4-78b2bf2e37f4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6578d4d67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0", Pod:"whisker-6578d4d67b-2hqzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali231889a78cc", MAC:"de:8d:43:7c:a5:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:18.639056 containerd[1711]: 2025-12-16 13:07:18.634 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" Namespace="calico-system" Pod="whisker-6578d4d67b-2hqzh" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-whisker--6578d4d67b--2hqzh-eth0" Dec 16 13:07:18.689773 containerd[1711]: time="2025-12-16T13:07:18.688872260Z" level=info msg="connecting to shim 72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0" address="unix:///run/containerd/s/b7c457d60e210ab4bf1479bd2ee65681a91fe67efa711decdd2be7f2adce20a7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:18.719654 systemd[1]: Started 
cri-containerd-72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0.scope - libcontainer container 72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0. Dec 16 13:07:18.769140 containerd[1711]: time="2025-12-16T13:07:18.769062082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578d4d67b-2hqzh,Uid:9651eb18-927a-4296-81c4-78b2bf2e37f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"72b439720779254ec305ac3f44e7b66a704f10af2db4925c48a3f9bc81475ea0\"" Dec 16 13:07:18.770530 containerd[1711]: time="2025-12-16T13:07:18.770506876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:07:19.137241 containerd[1711]: time="2025-12-16T13:07:19.137195870Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:19.140431 containerd[1711]: time="2025-12-16T13:07:19.140380716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:07:19.141203 containerd[1711]: time="2025-12-16T13:07:19.140410231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:07:19.141269 kubelet[3177]: E1216 13:07:19.140610 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:19.141269 kubelet[3177]: E1216 13:07:19.140661 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:19.141269 kubelet[3177]: E1216 13:07:19.140740 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:19.143064 containerd[1711]: time="2025-12-16T13:07:19.143035057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:07:19.519442 containerd[1711]: time="2025-12-16T13:07:19.519373610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:19.522357 containerd[1711]: time="2025-12-16T13:07:19.522304047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:07:19.522466 containerd[1711]: time="2025-12-16T13:07:19.522319836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:07:19.522622 kubelet[3177]: E1216 13:07:19.522580 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:19.522882 kubelet[3177]: E1216 13:07:19.522625 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:19.522882 kubelet[3177]: E1216 13:07:19.522743 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:19.522882 kubelet[3177]: E1216 13:07:19.522789 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:07:19.905791 kubelet[3177]: I1216 
13:07:19.905694 3177 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b80324f-a72e-4138-8ef4-af2e0235c136" path="/var/lib/kubelet/pods/8b80324f-a72e-4138-8ef4-af2e0235c136/volumes" Dec 16 13:07:19.966535 systemd-networkd[1337]: cali231889a78cc: Gained IPv6LL Dec 16 13:07:20.049891 kubelet[3177]: E1216 13:07:20.049835 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:07:21.914581 containerd[1711]: time="2025-12-16T13:07:21.914481497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4jzfv,Uid:ea5faaca-0d4e-431d-9277-cb31c23101e9,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:22.004317 systemd-networkd[1337]: calia688f572429: Link UP Dec 16 13:07:22.005905 systemd-networkd[1337]: calia688f572429: Gained carrier Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.944 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.954 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0 goldmane-7c778bb748- calico-system ea5faaca-0d4e-431d-9277-cb31c23101e9 843 0 2025-12-16 13:06:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac goldmane-7c778bb748-4jzfv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia688f572429 [] [] }} ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.954 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.972 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" HandleID="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.973 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" HandleID="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"goldmane-7c778bb748-4jzfv", "timestamp":"2025-12-16 13:07:21.972859867 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.973 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.973 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.973 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.977 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.980 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.983 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.984 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.986 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.986 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.66.128/26 handle="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.987 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02 Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:21.994 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:22.001 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.66.130/26] block=192.168.66.128/26 handle="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:22.001 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.130/26] handle="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:22.001 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:07:22.022322 containerd[1711]: 2025-12-16 13:07:22.001 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.130/26] IPv6=[] ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" HandleID="k8s-pod-network.b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.002 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ea5faaca-0d4e-431d-9277-cb31c23101e9", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"goldmane-7c778bb748-4jzfv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"calia688f572429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.003 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.130/32] ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.003 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia688f572429 ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.006 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.007 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", 
UID:"ea5faaca-0d4e-431d-9277-cb31c23101e9", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02", Pod:"goldmane-7c778bb748-4jzfv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia688f572429", MAC:"22:6e:fa:c7:9d:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:22.022795 containerd[1711]: 2025-12-16 13:07:22.019 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" Namespace="calico-system" Pod="goldmane-7c778bb748-4jzfv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-goldmane--7c778bb748--4jzfv-eth0" Dec 16 13:07:22.072028 containerd[1711]: time="2025-12-16T13:07:22.071468194Z" level=info msg="connecting to shim b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02" address="unix:///run/containerd/s/c7230332fe2de5bb612703ecb531f34b18e18952658512ed566a281b8ae0f83e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:22.093547 systemd[1]: Started 
cri-containerd-b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02.scope - libcontainer container b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02. Dec 16 13:07:22.133616 containerd[1711]: time="2025-12-16T13:07:22.133583224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4jzfv,Uid:ea5faaca-0d4e-431d-9277-cb31c23101e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5c481254f3e03a87ca14aa66f0692581400609a7ed690415371ef425cbccb02\"" Dec 16 13:07:22.135048 containerd[1711]: time="2025-12-16T13:07:22.135023806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:07:22.489056 kubelet[3177]: I1216 13:07:22.488863 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:07:22.536165 containerd[1711]: time="2025-12-16T13:07:22.536124887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:22.539337 containerd[1711]: time="2025-12-16T13:07:22.539291150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:07:22.539459 containerd[1711]: time="2025-12-16T13:07:22.539380828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:22.539654 kubelet[3177]: E1216 13:07:22.539626 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:22.539707 kubelet[3177]: E1216 13:07:22.539663 
3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:22.539794 kubelet[3177]: E1216 13:07:22.539779 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:22.539827 kubelet[3177]: E1216 13:07:22.539811 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:22.915596 containerd[1711]: time="2025-12-16T13:07:22.915374374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdf94b9c-fggbk,Uid:9e221d7a-639b-4bcb-8508-8080960234ac,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:22.932620 containerd[1711]: time="2025-12-16T13:07:22.932582713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w7769,Uid:ccba9c4c-4f0e-4c2b-88e7-422574903af0,Namespace:calico-system,Attempt:0,}" Dec 16 13:07:23.058851 kubelet[3177]: E1216 13:07:23.058810 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:23.150635 systemd-networkd[1337]: cali26d2d214205: Link UP Dec 16 13:07:23.150843 systemd-networkd[1337]: cali26d2d214205: Gained carrier Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.030 [INFO][4576] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0 csi-node-driver- calico-system ccba9c4c-4f0e-4c2b-88e7-422574903af0 729 0 2025-12-16 13:07:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac csi-node-driver-w7769 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali26d2d214205 [] [] }} ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.031 [INFO][4576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" 
Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.099 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" HandleID="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.100 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" HandleID="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"csi-node-driver-w7769", "timestamp":"2025-12-16 13:07:23.09931526 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.100 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.100 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.100 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.112 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.116 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.120 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.123 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.125 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.125 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.126 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770 Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.133 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.140 [INFO][4609] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.66.131/26] block=192.168.66.128/26 handle="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.141 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.131/26] handle="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.141 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:07:23.183597 containerd[1711]: 2025-12-16 13:07:23.141 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.131/26] IPv6=[] ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" HandleID="k8s-pod-network.c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.184174 containerd[1711]: 2025-12-16 13:07:23.143 [INFO][4576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ccba9c4c-4f0e-4c2b-88e7-422574903af0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"csi-node-driver-w7769", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26d2d214205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:23.184174 containerd[1711]: 2025-12-16 13:07:23.144 [INFO][4576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.131/32] ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.184174 containerd[1711]: 2025-12-16 13:07:23.144 [INFO][4576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26d2d214205 ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.184174 containerd[1711]: 2025-12-16 13:07:23.155 [INFO][4576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.184174 
containerd[1711]: 2025-12-16 13:07:23.157 [INFO][4576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ccba9c4c-4f0e-4c2b-88e7-422574903af0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770", Pod:"csi-node-driver-w7769", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26d2d214205", MAC:"42:23:9f:0a:93:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:23.184174 containerd[1711]: 
2025-12-16 13:07:23.179 [INFO][4576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" Namespace="calico-system" Pod="csi-node-driver-w7769" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-csi--node--driver--w7769-eth0" Dec 16 13:07:23.243537 containerd[1711]: time="2025-12-16T13:07:23.243488643Z" level=info msg="connecting to shim c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770" address="unix:///run/containerd/s/99a00c1662f8ade4ae221386347094ddef67b5773f47c226c9d2e1c352c8a8cd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:23.270698 systemd-networkd[1337]: cali7c518573bd1: Link UP Dec 16 13:07:23.271340 systemd-networkd[1337]: cali7c518573bd1: Gained carrier Dec 16 13:07:23.277555 systemd[1]: Started cri-containerd-c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770.scope - libcontainer container c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770. 
Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.035 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0 calico-kube-controllers-66fdf94b9c- calico-system 9e221d7a-639b-4bcb-8508-8080960234ac 839 0 2025-12-16 13:07:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66fdf94b9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac calico-kube-controllers-66fdf94b9c-fggbk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c518573bd1 [] [] }} ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.035 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.104 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" HandleID="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.106 [INFO][4614] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" HandleID="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"calico-kube-controllers-66fdf94b9c-fggbk", "timestamp":"2025-12-16 13:07:23.104593026 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.106 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.141 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.141 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.217 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.223 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.230 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.233 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.238 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.239 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.244 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61 Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.249 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.259 [INFO][4614] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.66.132/26] block=192.168.66.128/26 handle="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.259 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.132/26] handle="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.259 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:07:23.296633 containerd[1711]: 2025-12-16 13:07:23.259 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.132/26] IPv6=[] ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" HandleID="k8s-pod-network.9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.264 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0", GenerateName:"calico-kube-controllers-66fdf94b9c-", Namespace:"calico-system", SelfLink:"", UID:"9e221d7a-639b-4bcb-8508-8080960234ac", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdf94b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"calico-kube-controllers-66fdf94b9c-fggbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c518573bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.264 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.132/32] ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.264 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c518573bd1 ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.273 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.273 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0", GenerateName:"calico-kube-controllers-66fdf94b9c-", Namespace:"calico-system", SelfLink:"", UID:"9e221d7a-639b-4bcb-8508-8080960234ac", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdf94b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61", Pod:"calico-kube-controllers-66fdf94b9c-fggbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c518573bd1", MAC:"ce:2e:ff:f9:f1:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:23.297755 containerd[1711]: 2025-12-16 13:07:23.291 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" Namespace="calico-system" Pod="calico-kube-controllers-66fdf94b9c-fggbk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--kube--controllers--66fdf94b9c--fggbk-eth0" Dec 16 13:07:23.336620 containerd[1711]: time="2025-12-16T13:07:23.336590106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w7769,Uid:ccba9c4c-4f0e-4c2b-88e7-422574903af0,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3758eedee708ce9e2c84146254c0814af20518c9529991efd5d8fe9b1f7d770\"" Dec 16 13:07:23.340888 containerd[1711]: time="2025-12-16T13:07:23.340806072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:07:23.355337 containerd[1711]: time="2025-12-16T13:07:23.355284782Z" level=info msg="connecting to shim 9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61" address="unix:///run/containerd/s/d96e94c9627f6d56bf7bd83260bff355ce9ad2aa00a4df564e2dd15ce7cc54b9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:23.380554 systemd[1]: Started cri-containerd-9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61.scope - libcontainer container 9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61. 
Dec 16 13:07:23.382945 systemd-networkd[1337]: vxlan.calico: Link UP Dec 16 13:07:23.383030 systemd-networkd[1337]: vxlan.calico: Gained carrier Dec 16 13:07:23.457427 containerd[1711]: time="2025-12-16T13:07:23.455153085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdf94b9c-fggbk,Uid:9e221d7a-639b-4bcb-8508-8080960234ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"9934a908a704db41270766added70586493f54cd974a4cd992094de3120cfc61\"" Dec 16 13:07:23.550777 systemd-networkd[1337]: calia688f572429: Gained IPv6LL Dec 16 13:07:23.704644 containerd[1711]: time="2025-12-16T13:07:23.704603154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:23.708506 containerd[1711]: time="2025-12-16T13:07:23.708360919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:07:23.708506 containerd[1711]: time="2025-12-16T13:07:23.708484248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:07:23.709094 containerd[1711]: time="2025-12-16T13:07:23.708988277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:07:23.709123 kubelet[3177]: E1216 13:07:23.708670 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:23.709123 kubelet[3177]: E1216 13:07:23.708711 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:23.709409 kubelet[3177]: E1216 13:07:23.709219 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:23.915012 containerd[1711]: time="2025-12-16T13:07:23.914961284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8sth5,Uid:317b8a1f-8f93-487a-a0c5-8114cd9eb845,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:24.014454 systemd-networkd[1337]: cali8c4664a7d12: Link UP Dec 16 13:07:24.014660 systemd-networkd[1337]: cali8c4664a7d12: Gained carrier Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.957 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0 coredns-66bc5c9577- kube-system 317b8a1f-8f93-487a-a0c5-8114cd9eb845 838 0 2025-12-16 13:06:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac coredns-66bc5c9577-8sth5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c4664a7d12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" 
Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.957 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.978 [INFO][4839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" HandleID="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.978 [INFO][4839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" HandleID="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"coredns-66bc5c9577-8sth5", "timestamp":"2025-12-16 13:07:23.978538747 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.978 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.978 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.978 [INFO][4839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.983 [INFO][4839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.987 [INFO][4839] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.990 [INFO][4839] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.992 [INFO][4839] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.993 [INFO][4839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.993 [INFO][4839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:23.995 [INFO][4839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7 Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:24.003 [INFO][4839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:24.008 [INFO][4839] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.66.133/26] block=192.168.66.128/26 handle="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:24.008 [INFO][4839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.133/26] handle="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:24.008 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:07:24.031725 containerd[1711]: 2025-12-16 13:07:24.008 [INFO][4839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.133/26] IPv6=[] ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" HandleID="k8s-pod-network.a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.033513 containerd[1711]: 2025-12-16 13:07:24.010 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"317b8a1f-8f93-487a-a0c5-8114cd9eb845", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"coredns-66bc5c9577-8sth5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c4664a7d12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:24.033513 containerd[1711]: 2025-12-16 13:07:24.010 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.133/32] ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.033513 containerd[1711]: 2025-12-16 13:07:24.010 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c4664a7d12 
ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.033513 containerd[1711]: 2025-12-16 13:07:24.016 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.033678 containerd[1711]: 2025-12-16 13:07:24.018 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"317b8a1f-8f93-487a-a0c5-8114cd9eb845", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7", 
Pod:"coredns-66bc5c9577-8sth5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c4664a7d12", MAC:"6a:b5:3a:a7:d1:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:24.033678 containerd[1711]: 2025-12-16 13:07:24.029 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" Namespace="kube-system" Pod="coredns-66bc5c9577-8sth5" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--8sth5-eth0" Dec 16 13:07:24.059528 kubelet[3177]: E1216 13:07:24.059491 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:24.067901 containerd[1711]: time="2025-12-16T13:07:24.067863903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:24.075915 containerd[1711]: time="2025-12-16T13:07:24.075873198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:07:24.075986 containerd[1711]: time="2025-12-16T13:07:24.075956244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:07:24.076099 kubelet[3177]: E1216 13:07:24.076071 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:24.076148 kubelet[3177]: E1216 13:07:24.076106 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:24.076261 kubelet[3177]: E1216 13:07:24.076238 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:24.076370 kubelet[3177]: E1216 13:07:24.076276 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:07:24.076567 containerd[1711]: time="2025-12-16T13:07:24.076530497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:07:24.093991 containerd[1711]: time="2025-12-16T13:07:24.093956719Z" level=info msg="connecting to shim a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7" address="unix:///run/containerd/s/59443808cfcb812bba1a71b14a5b2760e05e6c271a26b7ab51c9bdaac2093c2f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:24.115535 systemd[1]: Started cri-containerd-a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7.scope - libcontainer container a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7. 
Dec 16 13:07:24.156735 containerd[1711]: time="2025-12-16T13:07:24.156708257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8sth5,Uid:317b8a1f-8f93-487a-a0c5-8114cd9eb845,Namespace:kube-system,Attempt:0,} returns sandbox id \"a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7\"" Dec 16 13:07:24.164932 containerd[1711]: time="2025-12-16T13:07:24.164907433Z" level=info msg="CreateContainer within sandbox \"a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:07:24.187423 containerd[1711]: time="2025-12-16T13:07:24.186709824Z" level=info msg="Container 5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:24.202601 containerd[1711]: time="2025-12-16T13:07:24.202569536Z" level=info msg="CreateContainer within sandbox \"a407de97e2fe576d3646cdef16d582ee8803eb902f83472b8b93f69bebe2dfb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc\"" Dec 16 13:07:24.203566 containerd[1711]: time="2025-12-16T13:07:24.203540805Z" level=info msg="StartContainer for \"5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc\"" Dec 16 13:07:24.204456 containerd[1711]: time="2025-12-16T13:07:24.204423817Z" level=info msg="connecting to shim 5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc" address="unix:///run/containerd/s/59443808cfcb812bba1a71b14a5b2760e05e6c271a26b7ab51c9bdaac2093c2f" protocol=ttrpc version=3 Dec 16 13:07:24.222535 systemd[1]: Started cri-containerd-5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc.scope - libcontainer container 5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc. 
Dec 16 13:07:24.246785 containerd[1711]: time="2025-12-16T13:07:24.246764769Z" level=info msg="StartContainer for \"5eff450903298649b25cb971af71097eccc1544854b36a1f3ae150eb60329cdc\" returns successfully" Dec 16 13:07:24.450382 containerd[1711]: time="2025-12-16T13:07:24.450341345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:24.457413 containerd[1711]: time="2025-12-16T13:07:24.457362967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:07:24.457539 containerd[1711]: time="2025-12-16T13:07:24.457460494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:07:24.458192 kubelet[3177]: E1216 13:07:24.458148 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:24.458250 kubelet[3177]: E1216 13:07:24.458203 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:24.458295 kubelet[3177]: E1216 13:07:24.458276 3177 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:24.458374 kubelet[3177]: E1216 13:07:24.458321 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:24.702537 systemd-networkd[1337]: cali7c518573bd1: Gained IPv6LL Dec 16 13:07:24.910034 containerd[1711]: time="2025-12-16T13:07:24.909982266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-w97rk,Uid:6a37c451-a2e0-4310-89e9-a7160f2123e5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:07:25.003590 systemd-networkd[1337]: cali835c4952106: Link UP Dec 16 13:07:25.004593 systemd-networkd[1337]: cali835c4952106: Gained carrier Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.951 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0 calico-apiserver-5d6f84fc95- calico-apiserver 6a37c451-a2e0-4310-89e9-a7160f2123e5 842 0 2025-12-16 13:06:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d6f84fc95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac calico-apiserver-5d6f84fc95-w97rk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali835c4952106 [] [] }} ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.951 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.972 [INFO][4945] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" HandleID="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.972 [INFO][4945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" HandleID="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" 
Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"calico-apiserver-5d6f84fc95-w97rk", "timestamp":"2025-12-16 13:07:24.972864576 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.973 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.973 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.973 [INFO][4945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.977 [INFO][4945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.980 [INFO][4945] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.983 [INFO][4945] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.984 [INFO][4945] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.986 [INFO][4945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" 
Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.986 [INFO][4945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.987 [INFO][4945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.993 [INFO][4945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.999 [INFO][4945] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.66.134/26] block=192.168.66.128/26 handle="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.999 [INFO][4945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.134/26] handle="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.999 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:07:25.019677 containerd[1711]: 2025-12-16 13:07:24.999 [INFO][4945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.134/26] IPv6=[] ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" HandleID="k8s-pod-network.b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.000 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0", GenerateName:"calico-apiserver-5d6f84fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a37c451-a2e0-4310-89e9-a7160f2123e5", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d6f84fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"calico-apiserver-5d6f84fc95-w97rk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali835c4952106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.000 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.134/32] ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.000 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali835c4952106 ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.005 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.005 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0", GenerateName:"calico-apiserver-5d6f84fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a37c451-a2e0-4310-89e9-a7160f2123e5", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d6f84fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f", Pod:"calico-apiserver-5d6f84fc95-w97rk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali835c4952106", MAC:"c2:2e:5a:61:ad:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:25.020769 containerd[1711]: 2025-12-16 13:07:25.016 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-w97rk" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--w97rk-eth0" Dec 16 13:07:25.063033 kubelet[3177]: E1216 13:07:25.062991 3177 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:07:25.065106 kubelet[3177]: E1216 13:07:25.065049 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:25.077726 containerd[1711]: time="2025-12-16T13:07:25.077447117Z" level=info msg="connecting to shim b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f" address="unix:///run/containerd/s/ba2c538b480ceb51804b353c649d772d357cfd38f73f90f751f6bd50d92b6ef9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 
13:07:25.086552 systemd-networkd[1337]: cali26d2d214205: Gained IPv6LL Dec 16 13:07:25.086767 systemd-networkd[1337]: vxlan.calico: Gained IPv6LL Dec 16 13:07:25.106693 systemd[1]: Started cri-containerd-b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f.scope - libcontainer container b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f. Dec 16 13:07:25.142952 kubelet[3177]: I1216 13:07:25.142801 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8sth5" podStartSLOduration=40.142787897 podStartE2EDuration="40.142787897s" podCreationTimestamp="2025-12-16 13:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:25.14268783 +0000 UTC m=+45.336186039" watchObservedRunningTime="2025-12-16 13:07:25.142787897 +0000 UTC m=+45.336286102" Dec 16 13:07:25.186199 containerd[1711]: time="2025-12-16T13:07:25.186176418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-w97rk,Uid:6a37c451-a2e0-4310-89e9-a7160f2123e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8f76ec4477f19ef6ffa60d32fd9673cff733b0d47328fde64b263c720f5b49f\"" Dec 16 13:07:25.187635 containerd[1711]: time="2025-12-16T13:07:25.187611190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:25.406558 systemd-networkd[1337]: cali8c4664a7d12: Gained IPv6LL Dec 16 13:07:25.554510 containerd[1711]: time="2025-12-16T13:07:25.554472009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:25.557633 containerd[1711]: time="2025-12-16T13:07:25.557609828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:25.557713 containerd[1711]: time="2025-12-16T13:07:25.557688684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:25.557867 kubelet[3177]: E1216 13:07:25.557832 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:25.557928 kubelet[3177]: E1216 13:07:25.557877 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:25.557976 kubelet[3177]: E1216 13:07:25.557958 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:25.558022 kubelet[3177]: E1216 13:07:25.557998 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:25.911590 containerd[1711]: time="2025-12-16T13:07:25.911546969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xg8cv,Uid:8ace8d57-2637-433f-b5cf-8ad4a3667131,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:25.915806 containerd[1711]: time="2025-12-16T13:07:25.915737207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-wd4m8,Uid:7a82fe54-15b6-44bf-9df4-aa8e33fe1999,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:07:26.034884 systemd-networkd[1337]: cali1e6c21f8445: Link UP Dec 16 13:07:26.038973 systemd-networkd[1337]: cali1e6c21f8445: Gained carrier Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.967 [INFO][5008] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0 coredns-66bc5c9577- kube-system 8ace8d57-2637-433f-b5cf-8ad4a3667131 835 0 2025-12-16 13:06:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac coredns-66bc5c9577-xg8cv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e6c21f8445 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.968 [INFO][5008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.998 [INFO][5031] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" HandleID="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.998 [INFO][5031] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" HandleID="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"coredns-66bc5c9577-xg8cv", "timestamp":"2025-12-16 13:07:25.998050917 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.998 [INFO][5031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.998 [INFO][5031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:25.998 [INFO][5031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.004 [INFO][5031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.007 [INFO][5031] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.010 [INFO][5031] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.012 [INFO][5031] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.013 [INFO][5031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.014 [INFO][5031] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.015 [INFO][5031] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7 Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.019 [INFO][5031] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5031] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.66.135/26] block=192.168.66.128/26 handle="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.135/26] handle="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:07:26.056338 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5031] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.135/26] IPv6=[] ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" HandleID="k8s-pod-network.86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.057841 containerd[1711]: 2025-12-16 13:07:26.031 [INFO][5008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ace8d57-2637-433f-b5cf-8ad4a3667131", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"coredns-66bc5c9577-xg8cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6c21f8445", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:26.057841 containerd[1711]: 2025-12-16 13:07:26.032 [INFO][5008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.135/32] ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.057841 containerd[1711]: 2025-12-16 13:07:26.032 [INFO][5008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e6c21f8445 
ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.057841 containerd[1711]: 2025-12-16 13:07:26.037 [INFO][5008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.058440 containerd[1711]: 2025-12-16 13:07:26.038 [INFO][5008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ace8d57-2637-433f-b5cf-8ad4a3667131", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7", 
Pod:"coredns-66bc5c9577-xg8cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6c21f8445", MAC:"ce:c1:1a:9b:06:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:26.058440 containerd[1711]: 2025-12-16 13:07:26.054 [INFO][5008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" Namespace="kube-system" Pod="coredns-66bc5c9577-xg8cv" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-coredns--66bc5c9577--xg8cv-eth0" Dec 16 13:07:26.067314 kubelet[3177]: E1216 13:07:26.067277 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:26.112072 containerd[1711]: time="2025-12-16T13:07:26.111995097Z" level=info msg="connecting to shim 86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7" address="unix:///run/containerd/s/3a647553d2fbffa4bfe2c0f21bdf55705f0081d12313d5714b15c8b478c351ab" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:26.139558 systemd[1]: Started cri-containerd-86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7.scope - libcontainer container 86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7. Dec 16 13:07:26.159666 systemd-networkd[1337]: calia15fb6cbeca: Link UP Dec 16 13:07:26.160750 systemd-networkd[1337]: calia15fb6cbeca: Gained carrier Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:25.974 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0 calico-apiserver-5d6f84fc95- calico-apiserver 7a82fe54-15b6-44bf-9df4-aa8e33fe1999 840 0 2025-12-16 13:06:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d6f84fc95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-22a3eae3ac calico-apiserver-5d6f84fc95-wd4m8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia15fb6cbeca [] [] }} ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:25.974 [INFO][5019] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.002 [INFO][5036] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" HandleID="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.003 [INFO][5036] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" HandleID="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-22a3eae3ac", "pod":"calico-apiserver-5d6f84fc95-wd4m8", "timestamp":"2025-12-16 13:07:26.002943268 +0000 UTC"}, Hostname:"ci-4459.2.2-a-22a3eae3ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.003 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.029 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-22a3eae3ac' Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.109 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.120 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.126 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.130 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.133 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.133 [INFO][5036] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.136 [INFO][5036] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.143 [INFO][5036] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.154 [INFO][5036] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.66.136/26] block=192.168.66.128/26 handle="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.154 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.136/26] handle="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" host="ci-4459.2.2-a-22a3eae3ac" Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.154 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:07:26.194497 containerd[1711]: 2025-12-16 13:07:26.154 [INFO][5036] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.136/26] IPv6=[] ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" HandleID="k8s-pod-network.821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Workload="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.157 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0", GenerateName:"calico-apiserver-5d6f84fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a82fe54-15b6-44bf-9df4-aa8e33fe1999", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d6f84fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"", Pod:"calico-apiserver-5d6f84fc95-wd4m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia15fb6cbeca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.157 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.136/32] ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.157 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia15fb6cbeca ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.164 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.168 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0", GenerateName:"calico-apiserver-5d6f84fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a82fe54-15b6-44bf-9df4-aa8e33fe1999", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d6f84fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-22a3eae3ac", ContainerID:"821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d", Pod:"calico-apiserver-5d6f84fc95-wd4m8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calia15fb6cbeca", MAC:"de:d1:e1:31:ad:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:07:26.196099 containerd[1711]: 2025-12-16 13:07:26.190 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" Namespace="calico-apiserver" Pod="calico-apiserver-5d6f84fc95-wd4m8" WorkloadEndpoint="ci--4459.2.2--a--22a3eae3ac-k8s-calico--apiserver--5d6f84fc95--wd4m8-eth0" Dec 16 13:07:26.215795 containerd[1711]: time="2025-12-16T13:07:26.215758190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xg8cv,Uid:8ace8d57-2637-433f-b5cf-8ad4a3667131,Namespace:kube-system,Attempt:0,} returns sandbox id \"86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7\"" Dec 16 13:07:26.227730 containerd[1711]: time="2025-12-16T13:07:26.227379505Z" level=info msg="CreateContainer within sandbox \"86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:07:26.261616 containerd[1711]: time="2025-12-16T13:07:26.261579711Z" level=info msg="connecting to shim 821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d" address="unix:///run/containerd/s/92b08c0e9fa0012081934c29d337eaa7a8ce247f01364884e0e8edcc36da5a8c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:26.270363 containerd[1711]: time="2025-12-16T13:07:26.267735457Z" level=info msg="Container 50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:26.270350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882566186.mount: Deactivated successfully. 
Dec 16 13:07:26.288143 containerd[1711]: time="2025-12-16T13:07:26.287538208Z" level=info msg="CreateContainer within sandbox \"86559757ba35224fa1c5a271033d2bb334d8e895183b7e4af2a25763174d79f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f\"" Dec 16 13:07:26.292660 containerd[1711]: time="2025-12-16T13:07:26.292636568Z" level=info msg="StartContainer for \"50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f\"" Dec 16 13:07:26.293421 containerd[1711]: time="2025-12-16T13:07:26.293349289Z" level=info msg="connecting to shim 50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f" address="unix:///run/containerd/s/3a647553d2fbffa4bfe2c0f21bdf55705f0081d12313d5714b15c8b478c351ab" protocol=ttrpc version=3 Dec 16 13:07:26.295684 systemd[1]: Started cri-containerd-821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d.scope - libcontainer container 821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d. Dec 16 13:07:26.317550 systemd[1]: Started cri-containerd-50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f.scope - libcontainer container 50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f. 
Dec 16 13:07:26.363622 containerd[1711]: time="2025-12-16T13:07:26.363600534Z" level=info msg="StartContainer for \"50b7b150e93a4ef46208aebe016bcfa63b21c9900c57a8baf3570382f153534f\" returns successfully" Dec 16 13:07:26.422513 containerd[1711]: time="2025-12-16T13:07:26.422481114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d6f84fc95-wd4m8,Uid:7a82fe54-15b6-44bf-9df4-aa8e33fe1999,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"821cf27aef5b2ca457ddcebf13876195ba77a5b712d71069d9370205444ac71d\"" Dec 16 13:07:26.427983 containerd[1711]: time="2025-12-16T13:07:26.427783214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:26.798138 containerd[1711]: time="2025-12-16T13:07:26.798083958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:26.802311 containerd[1711]: time="2025-12-16T13:07:26.802156535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:26.802311 containerd[1711]: time="2025-12-16T13:07:26.802279703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:26.802821 kubelet[3177]: E1216 13:07:26.802753 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:26.802979 kubelet[3177]: E1216 13:07:26.802901 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:26.803324 kubelet[3177]: E1216 13:07:26.803229 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:26.803324 kubelet[3177]: E1216 13:07:26.803279 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:26.942954 systemd-networkd[1337]: cali835c4952106: Gained IPv6LL Dec 16 13:07:27.069359 kubelet[3177]: E1216 13:07:27.068851 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:27.073245 kubelet[3177]: E1216 13:07:27.073217 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:27.120578 kubelet[3177]: I1216 13:07:27.120524 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xg8cv" podStartSLOduration=42.120508115 podStartE2EDuration="42.120508115s" podCreationTimestamp="2025-12-16 13:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:27.11932193 +0000 UTC m=+47.312820137" watchObservedRunningTime="2025-12-16 13:07:27.120508115 +0000 UTC m=+47.314006320" Dec 16 13:07:27.582956 systemd-networkd[1337]: calia15fb6cbeca: Gained IPv6LL Dec 16 13:07:28.030607 systemd-networkd[1337]: cali1e6c21f8445: Gained IPv6LL Dec 16 13:07:28.073862 kubelet[3177]: E1216 13:07:28.073827 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:33.909333 containerd[1711]: time="2025-12-16T13:07:33.907597772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:07:34.297410 containerd[1711]: time="2025-12-16T13:07:34.297353542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:34.300313 containerd[1711]: time="2025-12-16T13:07:34.300285991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:07:34.300433 containerd[1711]: time="2025-12-16T13:07:34.300362796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:07:34.300547 kubelet[3177]: E1216 13:07:34.300514 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:34.300827 kubelet[3177]: E1216 13:07:34.300560 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:34.300827 kubelet[3177]: E1216 13:07:34.300639 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:34.301859 containerd[1711]: time="2025-12-16T13:07:34.301818734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:07:34.685406 containerd[1711]: time="2025-12-16T13:07:34.685357633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:34.689072 containerd[1711]: time="2025-12-16T13:07:34.689041894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:07:34.689152 containerd[1711]: time="2025-12-16T13:07:34.689126684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:07:34.689336 kubelet[3177]: E1216 13:07:34.689302 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:34.689390 kubelet[3177]: E1216 13:07:34.689349 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:34.689472 kubelet[3177]: E1216 13:07:34.689443 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:34.689544 kubelet[3177]: E1216 13:07:34.689501 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:07:36.905932 containerd[1711]: time="2025-12-16T13:07:36.905881917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:07:37.283938 containerd[1711]: time="2025-12-16T13:07:37.283887445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:37.287188 containerd[1711]: time="2025-12-16T13:07:37.287079459Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:07:37.287188 containerd[1711]: time="2025-12-16T13:07:37.287108333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:07:37.287381 kubelet[3177]: E1216 13:07:37.287347 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:37.287735 kubelet[3177]: E1216 13:07:37.287410 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:37.287735 kubelet[3177]: E1216 13:07:37.287487 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:37.288727 containerd[1711]: time="2025-12-16T13:07:37.288653398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:07:37.642289 containerd[1711]: time="2025-12-16T13:07:37.642160838Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 16 13:07:37.645193 containerd[1711]: time="2025-12-16T13:07:37.645150432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:07:37.645409 containerd[1711]: time="2025-12-16T13:07:37.645184015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:07:37.645597 kubelet[3177]: E1216 13:07:37.645562 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:37.645658 kubelet[3177]: E1216 13:07:37.645612 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:37.645748 kubelet[3177]: E1216 13:07:37.645729 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:37.646155 kubelet[3177]: E1216 13:07:37.646126 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:37.907598 containerd[1711]: time="2025-12-16T13:07:37.906833480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:38.274552 containerd[1711]: time="2025-12-16T13:07:38.274509014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:38.277825 containerd[1711]: time="2025-12-16T13:07:38.277763059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:38.277825 containerd[1711]: time="2025-12-16T13:07:38.277805230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:38.278028 kubelet[3177]: E1216 
13:07:38.277994 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:38.278096 kubelet[3177]: E1216 13:07:38.278041 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:38.278162 kubelet[3177]: E1216 13:07:38.278128 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:38.278203 kubelet[3177]: E1216 13:07:38.278181 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:38.906005 containerd[1711]: time="2025-12-16T13:07:38.905613826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" 
Dec 16 13:07:39.259134 containerd[1711]: time="2025-12-16T13:07:39.259088692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:39.261901 containerd[1711]: time="2025-12-16T13:07:39.261869509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:07:39.261968 containerd[1711]: time="2025-12-16T13:07:39.261948907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:39.262185 kubelet[3177]: E1216 13:07:39.262150 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:39.262462 kubelet[3177]: E1216 13:07:39.262198 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:39.262462 kubelet[3177]: E1216 13:07:39.262283 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:39.262462 kubelet[3177]: E1216 13:07:39.262315 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:39.909355 containerd[1711]: time="2025-12-16T13:07:39.909277877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:07:40.266008 containerd[1711]: time="2025-12-16T13:07:40.265945841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:40.269280 containerd[1711]: time="2025-12-16T13:07:40.269242732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:07:40.269377 containerd[1711]: time="2025-12-16T13:07:40.269261398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:07:40.269536 kubelet[3177]: E1216 13:07:40.269501 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:40.269774 
kubelet[3177]: E1216 13:07:40.269547 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:40.269774 kubelet[3177]: E1216 13:07:40.269649 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:40.270074 kubelet[3177]: E1216 13:07:40.270039 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:07:41.905352 containerd[1711]: time="2025-12-16T13:07:41.905260185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:42.283226 containerd[1711]: time="2025-12-16T13:07:42.283182105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:42.287585 containerd[1711]: time="2025-12-16T13:07:42.287555569Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:42.287665 containerd[1711]: time="2025-12-16T13:07:42.287637219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:07:42.287825 kubelet[3177]: E1216 13:07:42.287780 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:42.288130 kubelet[3177]: E1216 13:07:42.287826 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:42.288130 kubelet[3177]: E1216 13:07:42.287926 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:42.289148 kubelet[3177]: E1216 13:07:42.288330 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:46.905560 kubelet[3177]: E1216 13:07:46.905505 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:07:49.906199 kubelet[3177]: E1216 13:07:49.906087 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:07:50.906431 kubelet[3177]: E1216 13:07:50.906291 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:07:51.372497 waagent[1897]: 2025-12-16T13:07:51.372322Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Dec 16 13:07:51.383425 waagent[1897]: 2025-12-16T13:07:51.382667Z INFO ExtHandler Dec 16 13:07:51.383425 waagent[1897]: 2025-12-16T13:07:51.382767Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7b750a9a-151e-419d-a76c-cb3abe1d337c eTag: 11403687590242842928 source: Fabric] Dec 16 13:07:51.383425 waagent[1897]: 2025-12-16T13:07:51.383061Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:07:51.383709 waagent[1897]: 2025-12-16T13:07:51.383665Z INFO ExtHandler Dec 16 13:07:51.383757 waagent[1897]: 2025-12-16T13:07:51.383723Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Dec 16 13:07:51.448296 waagent[1897]: 2025-12-16T13:07:51.448255Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:07:51.513538 waagent[1897]: 2025-12-16T13:07:51.513480Z INFO ExtHandler Downloaded certificate {'thumbprint': '342A91527E17CF0AFDA707C616B2E7D57D88ABCD', 'hasPrivateKey': True} Dec 16 13:07:51.513939 waagent[1897]: 2025-12-16T13:07:51.513909Z INFO ExtHandler Fetch goal state completed Dec 16 13:07:51.514205 waagent[1897]: 2025-12-16T13:07:51.514180Z INFO ExtHandler ExtHandler Dec 16 13:07:51.514247 waagent[1897]: 2025-12-16T13:07:51.514230Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c24b0be8-8203-410f-9522-d6dc46338f11 correlation a9ff4e37-bd99-4d77-9280-4193e5c626f4 created: 2025-12-16T13:07:44.304680Z] Dec 16 13:07:51.514495 waagent[1897]: 2025-12-16T13:07:51.514468Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Dec 16 13:07:51.514901 waagent[1897]: 2025-12-16T13:07:51.514877Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Dec 16 13:07:52.907687 kubelet[3177]: E1216 13:07:52.907638 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:07:54.905404 kubelet[3177]: E1216 13:07:54.905346 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:07:56.905536 kubelet[3177]: E1216 13:07:56.905465 3177 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:07:59.908886 containerd[1711]: time="2025-12-16T13:07:59.908824040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:08:00.293887 containerd[1711]: time="2025-12-16T13:08:00.293828176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:00.297189 containerd[1711]: time="2025-12-16T13:08:00.297101897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:08:00.297462 containerd[1711]: time="2025-12-16T13:08:00.297142897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:08:00.298652 kubelet[3177]: E1216 13:08:00.298547 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:00.299796 kubelet[3177]: E1216 13:08:00.298626 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:00.299962 kubelet[3177]: E1216 13:08:00.299572 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:00.301610 containerd[1711]: time="2025-12-16T13:08:00.301585636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:08:00.666051 containerd[1711]: time="2025-12-16T13:08:00.665933041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:00.669420 containerd[1711]: time="2025-12-16T13:08:00.669254924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:08:00.669420 containerd[1711]: time="2025-12-16T13:08:00.669298740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:00.670683 kubelet[3177]: E1216 13:08:00.670611 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:00.670950 kubelet[3177]: E1216 13:08:00.670810 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:00.670950 kubelet[3177]: E1216 13:08:00.670921 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:00.671084 kubelet[3177]: E1216 13:08:00.671061 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:08:01.908107 containerd[1711]: 
time="2025-12-16T13:08:01.907096279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:08:02.281507 containerd[1711]: time="2025-12-16T13:08:02.281456259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:02.284563 containerd[1711]: time="2025-12-16T13:08:02.284516340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:08:02.284627 containerd[1711]: time="2025-12-16T13:08:02.284527396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:02.284807 kubelet[3177]: E1216 13:08:02.284776 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:02.285111 kubelet[3177]: E1216 13:08:02.284819 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:02.285321 kubelet[3177]: E1216 13:08:02.285297 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:02.285825 kubelet[3177]: E1216 13:08:02.285347 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:08:04.905756 containerd[1711]: time="2025-12-16T13:08:04.905674178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:05.277536 containerd[1711]: time="2025-12-16T13:08:05.277487049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:05.280899 containerd[1711]: time="2025-12-16T13:08:05.280796868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:05.280899 containerd[1711]: time="2025-12-16T13:08:05.280824728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:05.281076 kubelet[3177]: E1216 13:08:05.281030 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:05.281456 kubelet[3177]: E1216 13:08:05.281077 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:05.281456 kubelet[3177]: E1216 13:08:05.281161 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:05.281456 kubelet[3177]: E1216 13:08:05.281193 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:08:05.906664 containerd[1711]: time="2025-12-16T13:08:05.906525197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:08:06.268750 containerd[1711]: time="2025-12-16T13:08:06.268584459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:06.271765 containerd[1711]: time="2025-12-16T13:08:06.271697432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:08:06.272000 containerd[1711]: time="2025-12-16T13:08:06.271878104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:06.272378 kubelet[3177]: E1216 13:08:06.272337 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:08:06.272548 kubelet[3177]: E1216 13:08:06.272533 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:08:06.273446 kubelet[3177]: E1216 13:08:06.273419 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:06.273654 kubelet[3177]: E1216 13:08:06.273610 3177 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:08:07.913429 containerd[1711]: time="2025-12-16T13:08:07.913210988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:08:08.282769 containerd[1711]: time="2025-12-16T13:08:08.282724409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:08.286805 containerd[1711]: time="2025-12-16T13:08:08.286765493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:08:08.287173 containerd[1711]: time="2025-12-16T13:08:08.286864595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:08:08.287559 kubelet[3177]: E1216 13:08:08.287469 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:08.287559 kubelet[3177]: E1216 13:08:08.287540 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:08.288495 kubelet[3177]: E1216 13:08:08.287951 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:08.290784 containerd[1711]: time="2025-12-16T13:08:08.290758169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:08:08.655311 containerd[1711]: time="2025-12-16T13:08:08.655176351Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:08.658187 containerd[1711]: time="2025-12-16T13:08:08.658093663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:08:08.658187 containerd[1711]: time="2025-12-16T13:08:08.658134524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:08:08.658398 kubelet[3177]: E1216 13:08:08.658355 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:08.658473 kubelet[3177]: E1216 13:08:08.658422 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:08.658530 kubelet[3177]: E1216 13:08:08.658506 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:08.658611 kubelet[3177]: E1216 13:08:08.658558 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:08:08.906287 containerd[1711]: 
time="2025-12-16T13:08:08.906067162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:09.294577 containerd[1711]: time="2025-12-16T13:08:09.294521849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:09.299004 containerd[1711]: time="2025-12-16T13:08:09.298877582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:09.299004 containerd[1711]: time="2025-12-16T13:08:09.298977523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:09.299406 kubelet[3177]: E1216 13:08:09.299331 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:09.299928 kubelet[3177]: E1216 13:08:09.299376 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:09.299928 kubelet[3177]: E1216 13:08:09.299771 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:09.299928 kubelet[3177]: E1216 13:08:09.299896 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:08:14.906011 kubelet[3177]: E1216 13:08:14.905939 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:08:16.904743 kubelet[3177]: E1216 13:08:16.904662 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:08:20.906576 kubelet[3177]: E1216 13:08:20.906534 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999" Dec 16 13:08:20.907109 kubelet[3177]: E1216 13:08:20.906910 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:08:20.907109 kubelet[3177]: E1216 13:08:20.907063 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:08:22.906536 kubelet[3177]: E1216 13:08:22.906464 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0" Dec 16 13:08:26.908955 kubelet[3177]: E1216 13:08:26.908894 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4" Dec 16 13:08:27.829203 systemd[1]: Started sshd@7-10.200.0.33:22-10.200.16.10:40592.service - OpenSSH per-connection server daemon (10.200.16.10:40592). Dec 16 13:08:28.392473 sshd[5305]: Accepted publickey for core from 10.200.16.10 port 40592 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:08:28.393851 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:28.398333 systemd-logind[1691]: New session 10 of user core. Dec 16 13:08:28.408559 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:08:28.876358 sshd[5308]: Connection closed by 10.200.16.10 port 40592 Dec 16 13:08:28.876946 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:28.880865 systemd[1]: sshd@7-10.200.0.33:22-10.200.16.10:40592.service: Deactivated successfully. Dec 16 13:08:28.884136 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:08:28.886110 systemd-logind[1691]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:08:28.888849 systemd-logind[1691]: Removed session 10. 
Dec 16 13:08:31.908759 kubelet[3177]: E1216 13:08:31.908711 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9" Dec 16 13:08:32.904830 kubelet[3177]: E1216 13:08:32.904747 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac" Dec 16 13:08:33.907892 kubelet[3177]: E1216 13:08:33.907848 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5" Dec 16 13:08:33.975661 systemd[1]: Started 
sshd@8-10.200.0.33:22-10.200.16.10:40530.service - OpenSSH per-connection server daemon (10.200.16.10:40530).
Dec 16 13:08:34.533904 sshd[5321]: Accepted publickey for core from 10.200.16.10 port 40530 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:34.535084 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:34.538971 systemd-logind[1691]: New session 11 of user core.
Dec 16 13:08:34.549522 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:08:34.905273 kubelet[3177]: E1216 13:08:34.905155 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999"
Dec 16 13:08:34.908260 kubelet[3177]: E1216 13:08:34.906388 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0"
Dec 16 13:08:35.008092 sshd[5324]: Connection closed by 10.200.16.10 port 40530
Dec 16 13:08:35.009633 sshd-session[5321]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:35.012273 systemd[1]: sshd@8-10.200.0.33:22-10.200.16.10:40530.service: Deactivated successfully.
Dec 16 13:08:35.014183 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:08:35.015545 systemd-logind[1691]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:08:35.017699 systemd-logind[1691]: Removed session 11.
Dec 16 13:08:38.905754 kubelet[3177]: E1216 13:08:38.905689 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4"
Dec 16 13:08:40.108537 systemd[1]: Started sshd@9-10.200.0.33:22-10.200.16.10:46958.service - OpenSSH per-connection server daemon (10.200.16.10:46958).
Dec 16 13:08:40.673115 sshd[5339]: Accepted publickey for core from 10.200.16.10 port 46958 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:40.674552 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:40.679067 systemd-logind[1691]: New session 12 of user core.
Dec 16 13:08:40.686527 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:08:41.116813 sshd[5342]: Connection closed by 10.200.16.10 port 46958
Dec 16 13:08:41.117731 sshd-session[5339]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:41.122556 systemd-logind[1691]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:08:41.123172 systemd[1]: sshd@9-10.200.0.33:22-10.200.16.10:46958.service: Deactivated successfully.
Dec 16 13:08:41.126376 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:08:41.130216 systemd-logind[1691]: Removed session 12.
Dec 16 13:08:41.213531 systemd[1]: Started sshd@10-10.200.0.33:22-10.200.16.10:46966.service - OpenSSH per-connection server daemon (10.200.16.10:46966).
Dec 16 13:08:41.770604 sshd[5355]: Accepted publickey for core from 10.200.16.10 port 46966 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:41.772366 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:41.777107 systemd-logind[1691]: New session 13 of user core.
Dec 16 13:08:41.784741 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:08:42.252164 sshd[5358]: Connection closed by 10.200.16.10 port 46966
Dec 16 13:08:42.252748 sshd-session[5355]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:42.260385 systemd[1]: sshd@10-10.200.0.33:22-10.200.16.10:46966.service: Deactivated successfully.
Dec 16 13:08:42.264306 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:08:42.266571 systemd-logind[1691]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:08:42.271845 systemd-logind[1691]: Removed session 13.
Dec 16 13:08:42.351013 systemd[1]: Started sshd@11-10.200.0.33:22-10.200.16.10:46978.service - OpenSSH per-connection server daemon (10.200.16.10:46978).
Dec 16 13:08:42.904218 sshd[5368]: Accepted publickey for core from 10.200.16.10 port 46978 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:42.905888 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:42.908865 containerd[1711]: time="2025-12-16T13:08:42.908326318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:08:42.914160 systemd-logind[1691]: New session 14 of user core.
Dec 16 13:08:42.919696 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:08:43.298803 containerd[1711]: time="2025-12-16T13:08:43.298627601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:43.303453 containerd[1711]: time="2025-12-16T13:08:43.302530036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:08:43.303671 containerd[1711]: time="2025-12-16T13:08:43.303433448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:08:43.304093 kubelet[3177]: E1216 13:08:43.304046 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:08:43.304467 kubelet[3177]: E1216 13:08:43.304103 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:08:43.304467 kubelet[3177]: E1216 13:08:43.304187 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4jzfv_calico-system(ea5faaca-0d4e-431d-9277-cb31c23101e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:43.304467 kubelet[3177]: E1216 13:08:43.304220 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9"
Dec 16 13:08:43.402336 sshd[5371]: Connection closed by 10.200.16.10 port 46978
Dec 16 13:08:43.402946 sshd-session[5368]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:43.406650 systemd-logind[1691]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:08:43.407269 systemd[1]: sshd@11-10.200.0.33:22-10.200.16.10:46978.service: Deactivated successfully.
Dec 16 13:08:43.411312 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:08:43.418009 systemd-logind[1691]: Removed session 14.
Dec 16 13:08:43.904876 kubelet[3177]: E1216 13:08:43.904756 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac"
Dec 16 13:08:45.906334 kubelet[3177]: E1216 13:08:45.906272 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0"
Dec 16 13:08:48.499632 systemd[1]: Started sshd@12-10.200.0.33:22-10.200.16.10:46980.service - OpenSSH per-connection server daemon (10.200.16.10:46980).
Dec 16 13:08:48.905119 kubelet[3177]: E1216 13:08:48.904731 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999"
Dec 16 13:08:48.905691 containerd[1711]: time="2025-12-16T13:08:48.904889896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:08:49.065573 sshd[5420]: Accepted publickey for core from 10.200.16.10 port 46980 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:49.066788 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:49.073901 systemd-logind[1691]: New session 15 of user core.
Dec 16 13:08:49.080571 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:08:49.264492 containerd[1711]: time="2025-12-16T13:08:49.264457106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:49.267491 containerd[1711]: time="2025-12-16T13:08:49.267383689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:08:49.267703 containerd[1711]: time="2025-12-16T13:08:49.267612721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:08:49.267894 kubelet[3177]: E1216 13:08:49.267855 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:08:49.267967 kubelet[3177]: E1216 13:08:49.267904 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:08:49.268510 kubelet[3177]: E1216 13:08:49.268483 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-w97rk_calico-apiserver(6a37c451-a2e0-4310-89e9-a7160f2123e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:49.268562 kubelet[3177]: E1216 13:08:49.268530 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5"
Dec 16 13:08:49.522493 sshd[5423]: Connection closed by 10.200.16.10 port 46980
Dec 16 13:08:49.525573 sshd-session[5420]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:49.529925 systemd-logind[1691]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:08:49.530858 systemd[1]: sshd@12-10.200.0.33:22-10.200.16.10:46980.service: Deactivated successfully.
Dec 16 13:08:49.534892 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:08:49.541010 systemd-logind[1691]: Removed session 15.
Dec 16 13:08:53.907223 containerd[1711]: time="2025-12-16T13:08:53.906953752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:08:54.268450 containerd[1711]: time="2025-12-16T13:08:54.267235114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:54.273702 containerd[1711]: time="2025-12-16T13:08:54.273614363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:08:54.273836 containerd[1711]: time="2025-12-16T13:08:54.273661840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 13:08:54.274672 kubelet[3177]: E1216 13:08:54.274630 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:54.275045 kubelet[3177]: E1216 13:08:54.274684 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:54.275045 kubelet[3177]: E1216 13:08:54.274764 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:54.276904 containerd[1711]: time="2025-12-16T13:08:54.276706265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:08:54.631315 containerd[1711]: time="2025-12-16T13:08:54.630878695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:54.634545 containerd[1711]: time="2025-12-16T13:08:54.634437602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:08:54.634545 containerd[1711]: time="2025-12-16T13:08:54.634483594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:54.634751 kubelet[3177]: E1216 13:08:54.634701 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:54.634811 kubelet[3177]: E1216 13:08:54.634768 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:54.634918 kubelet[3177]: E1216 13:08:54.634900 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6578d4d67b-2hqzh_calico-system(9651eb18-927a-4296-81c4-78b2bf2e37f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:54.635085 kubelet[3177]: E1216 13:08:54.635048 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4"
Dec 16 13:08:54.722420 systemd[1]: Started sshd@13-10.200.0.33:22-10.200.16.10:52132.service - OpenSSH per-connection server daemon (10.200.16.10:52132).
Dec 16 13:08:55.289281 sshd[5435]: Accepted publickey for core from 10.200.16.10 port 52132 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:08:55.290866 sshd-session[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:08:55.297463 systemd-logind[1691]: New session 16 of user core.
Dec 16 13:08:55.305685 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:08:55.780525 sshd[5452]: Connection closed by 10.200.16.10 port 52132
Dec 16 13:08:55.782348 sshd-session[5435]: pam_unix(sshd:session): session closed for user core
Dec 16 13:08:55.786766 systemd-logind[1691]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:08:55.787641 systemd[1]: sshd@13-10.200.0.33:22-10.200.16.10:52132.service: Deactivated successfully.
Dec 16 13:08:55.790431 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:08:55.793422 systemd-logind[1691]: Removed session 16.
Dec 16 13:08:55.909986 kubelet[3177]: E1216 13:08:55.909103 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9"
Dec 16 13:08:55.910420 containerd[1711]: time="2025-12-16T13:08:55.909705771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:08:56.272204 containerd[1711]: time="2025-12-16T13:08:56.272075892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:56.275218 containerd[1711]: time="2025-12-16T13:08:56.275081996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:08:56.275218 containerd[1711]: time="2025-12-16T13:08:56.275175423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:56.275759 kubelet[3177]: E1216 13:08:56.275550 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:56.275759 kubelet[3177]: E1216 13:08:56.275605 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:56.276577 kubelet[3177]: E1216 13:08:56.276470 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-66fdf94b9c-fggbk_calico-system(9e221d7a-639b-4bcb-8508-8080960234ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:56.276577 kubelet[3177]: E1216 13:08:56.276521 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac"
Dec 16 13:08:56.907512 containerd[1711]: time="2025-12-16T13:08:56.907458579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 13:08:57.426113 containerd[1711]: time="2025-12-16T13:08:57.426060233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:57.431425 containerd[1711]: time="2025-12-16T13:08:57.430147521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 13:08:57.431676 containerd[1711]: time="2025-12-16T13:08:57.431519246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 13:08:57.431975 kubelet[3177]: E1216 13:08:57.431915 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:08:57.432599 kubelet[3177]: E1216 13:08:57.431963 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 13:08:57.432599 kubelet[3177]: E1216 13:08:57.432343 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:57.434091 containerd[1711]: time="2025-12-16T13:08:57.434055625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 13:08:57.800351 containerd[1711]: time="2025-12-16T13:08:57.800129192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:57.803123 containerd[1711]: time="2025-12-16T13:08:57.802991253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 13:08:57.803123 containerd[1711]: time="2025-12-16T13:08:57.803101773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 13:08:57.803511 kubelet[3177]: E1216 13:08:57.803465 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:08:57.803656 kubelet[3177]: E1216 13:08:57.803595 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 13:08:57.804089 kubelet[3177]: E1216 13:08:57.803751 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w7769_calico-system(ccba9c4c-4f0e-4c2b-88e7-422574903af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:57.804089 kubelet[3177]: E1216 13:08:57.804054 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0"
Dec 16 13:09:00.888653 systemd[1]: Started sshd@14-10.200.0.33:22-10.200.16.10:37778.service - OpenSSH per-connection server daemon (10.200.16.10:37778).
Dec 16 13:09:01.453788 sshd[5464]: Accepted publickey for core from 10.200.16.10 port 37778 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:01.454974 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:01.459377 systemd-logind[1691]: New session 17 of user core.
Dec 16 13:09:01.464537 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:09:01.894967 sshd[5474]: Connection closed by 10.200.16.10 port 37778
Dec 16 13:09:01.895776 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:01.899048 systemd[1]: sshd@14-10.200.0.33:22-10.200.16.10:37778.service: Deactivated successfully.
Dec 16 13:09:01.900967 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:09:01.902054 systemd-logind[1691]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:09:01.903250 systemd-logind[1691]: Removed session 17.
Dec 16 13:09:01.906537 containerd[1711]: time="2025-12-16T13:09:01.906506955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:09:01.995697 systemd[1]: Started sshd@15-10.200.0.33:22-10.200.16.10:37784.service - OpenSSH per-connection server daemon (10.200.16.10:37784).
Dec 16 13:09:02.265911 containerd[1711]: time="2025-12-16T13:09:02.265865005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:09:02.268834 containerd[1711]: time="2025-12-16T13:09:02.268791246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:09:02.268938 containerd[1711]: time="2025-12-16T13:09:02.268895218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:09:02.269111 kubelet[3177]: E1216 13:09:02.269073 3177 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:09:02.269410 kubelet[3177]: E1216 13:09:02.269129 3177 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:09:02.269410 kubelet[3177]: E1216 13:09:02.269209 3177 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d6f84fc95-wd4m8_calico-apiserver(7a82fe54-15b6-44bf-9df4-aa8e33fe1999): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:09:02.269410 kubelet[3177]: E1216 13:09:02.269243 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999"
Dec 16 13:09:02.567490 sshd[5486]: Accepted publickey for core from 10.200.16.10 port 37784 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:02.569624 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:02.577430 systemd-logind[1691]: New session 18 of user core.
Dec 16 13:09:02.584840 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:09:03.037281 sshd[5489]: Connection closed by 10.200.16.10 port 37784
Dec 16 13:09:03.037903 sshd-session[5486]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:03.041735 systemd-logind[1691]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:09:03.041980 systemd[1]: sshd@15-10.200.0.33:22-10.200.16.10:37784.service: Deactivated successfully.
Dec 16 13:09:03.044122 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:09:03.045943 systemd-logind[1691]: Removed session 18.
Dec 16 13:09:03.140628 systemd[1]: Started sshd@16-10.200.0.33:22-10.200.16.10:37786.service - OpenSSH per-connection server daemon (10.200.16.10:37786).
Dec 16 13:09:03.700118 sshd[5499]: Accepted publickey for core from 10.200.16.10 port 37786 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:03.701132 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:03.708952 systemd-logind[1691]: New session 19 of user core.
Dec 16 13:09:03.712710 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:09:03.910866 kubelet[3177]: E1216 13:09:03.910500 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5"
Dec 16 13:09:04.856125 sshd[5502]: Connection closed by 10.200.16.10 port 37786
Dec 16 13:09:04.856588 sshd-session[5499]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:04.860951 systemd[1]: sshd@16-10.200.0.33:22-10.200.16.10:37786.service: Deactivated successfully.
Dec 16 13:09:04.861206 systemd-logind[1691]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:09:04.862981 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:09:04.864276 systemd-logind[1691]: Removed session 19.
Dec 16 13:09:04.955823 systemd[1]: Started sshd@17-10.200.0.33:22-10.200.16.10:37798.service - OpenSSH per-connection server daemon (10.200.16.10:37798).
Dec 16 13:09:05.515251 sshd[5518]: Accepted publickey for core from 10.200.16.10 port 37798 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:05.516416 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:05.520875 systemd-logind[1691]: New session 20 of user core.
Dec 16 13:09:05.525565 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:09:06.173084 sshd[5521]: Connection closed by 10.200.16.10 port 37798
Dec 16 13:09:06.174599 sshd-session[5518]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:06.179706 systemd-logind[1691]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:09:06.181139 systemd[1]: sshd@17-10.200.0.33:22-10.200.16.10:37798.service: Deactivated successfully.
Dec 16 13:09:06.184675 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:09:06.187125 systemd-logind[1691]: Removed session 20.
Dec 16 13:09:06.277467 systemd[1]: Started sshd@18-10.200.0.33:22-10.200.16.10:37806.service - OpenSSH per-connection server daemon (10.200.16.10:37806).
Dec 16 13:09:06.856622 sshd[5533]: Accepted publickey for core from 10.200.16.10 port 37806 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:06.857923 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:06.862326 systemd-logind[1691]: New session 21 of user core.
Dec 16 13:09:06.866528 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:09:06.905861 kubelet[3177]: E1216 13:09:06.905824 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9"
Dec 16 13:09:06.906554 kubelet[3177]: E1216 13:09:06.905992 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4"
Dec 16 13:09:07.301769 sshd[5536]: Connection closed by 10.200.16.10 port 37806
Dec 16 13:09:07.302372 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:07.305917 systemd[1]: sshd@18-10.200.0.33:22-10.200.16.10:37806.service: Deactivated successfully.
Dec 16 13:09:07.308202 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:09:07.309484 systemd-logind[1691]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:09:07.310423 systemd-logind[1691]: Removed session 21.
Dec 16 13:09:11.906451 kubelet[3177]: E1216 13:09:11.905778 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac"
Dec 16 13:09:12.405655 systemd[1]: Started sshd@19-10.200.0.33:22-10.200.16.10:56406.service - OpenSSH per-connection server daemon (10.200.16.10:56406).
Dec 16 13:09:12.905062 kubelet[3177]: E1216 13:09:12.904981 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999"
Dec 16 13:09:12.905910 kubelet[3177]: E1216 13:09:12.905867 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0"
Dec 16 13:09:12.971590 sshd[5550]: Accepted publickey for core from 10.200.16.10 port 56406 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:12.972760 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:12.977451 systemd-logind[1691]: New session 22 of user core.
Dec 16 13:09:12.983627 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:09:13.420866 sshd[5553]: Connection closed by 10.200.16.10 port 56406
Dec 16 13:09:13.421430 sshd-session[5550]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:13.424824 systemd[1]: sshd@19-10.200.0.33:22-10.200.16.10:56406.service: Deactivated successfully.
Dec 16 13:09:13.427294 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:09:13.428135 systemd-logind[1691]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:09:13.429312 systemd-logind[1691]: Removed session 22.
Dec 16 13:09:18.527656 systemd[1]: Started sshd@20-10.200.0.33:22-10.200.16.10:56410.service - OpenSSH per-connection server daemon (10.200.16.10:56410).
Dec 16 13:09:18.911540 kubelet[3177]: E1216 13:09:18.911192 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5"
Dec 16 13:09:18.912815 kubelet[3177]: E1216 13:09:18.911375 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4"
Dec 16 13:09:19.090954 sshd[5591]: Accepted publickey for core from 10.200.16.10 port 56410 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:19.092889 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:19.100236 systemd-logind[1691]: New session 23 of user core.
Dec 16 13:09:19.104309 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:09:19.530547 sshd[5594]: Connection closed by 10.200.16.10 port 56410
Dec 16 13:09:19.531110 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:19.534466 systemd[1]: sshd@20-10.200.0.33:22-10.200.16.10:56410.service: Deactivated successfully.
Dec 16 13:09:19.536106 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:09:19.537135 systemd-logind[1691]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:09:19.538489 systemd-logind[1691]: Removed session 23.
Dec 16 13:09:21.908430 kubelet[3177]: E1216 13:09:21.907652 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4jzfv" podUID="ea5faaca-0d4e-431d-9277-cb31c23101e9"
Dec 16 13:09:23.907217 kubelet[3177]: E1216 13:09:23.907149 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66fdf94b9c-fggbk" podUID="9e221d7a-639b-4bcb-8508-8080960234ac"
Dec 16 13:09:24.633773 systemd[1]: Started sshd@21-10.200.0.33:22-10.200.16.10:41698.service - OpenSSH per-connection server daemon (10.200.16.10:41698).
Dec 16 13:09:24.905992 kubelet[3177]: E1216 13:09:24.905674 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-wd4m8" podUID="7a82fe54-15b6-44bf-9df4-aa8e33fe1999"
Dec 16 13:09:25.192506 sshd[5607]: Accepted publickey for core from 10.200.16.10 port 41698 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:25.194324 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:25.199713 systemd-logind[1691]: New session 24 of user core.
Dec 16 13:09:25.206570 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:09:25.663383 sshd[5610]: Connection closed by 10.200.16.10 port 41698
Dec 16 13:09:25.664130 sshd-session[5607]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:25.668356 systemd[1]: sshd@21-10.200.0.33:22-10.200.16.10:41698.service: Deactivated successfully.
Dec 16 13:09:25.672157 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:09:25.673286 systemd-logind[1691]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:09:25.675388 systemd-logind[1691]: Removed session 24.
Dec 16 13:09:26.907945 kubelet[3177]: E1216 13:09:26.907889 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w7769" podUID="ccba9c4c-4f0e-4c2b-88e7-422574903af0"
Dec 16 13:09:29.908268 kubelet[3177]: E1216 13:09:29.908146 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d6f84fc95-w97rk" podUID="6a37c451-a2e0-4310-89e9-a7160f2123e5"
Dec 16 13:09:29.909548 kubelet[3177]: E1216 13:09:29.909467 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6578d4d67b-2hqzh" podUID="9651eb18-927a-4296-81c4-78b2bf2e37f4"
Dec 16 13:09:30.767728 systemd[1]: Started sshd@22-10.200.0.33:22-10.200.16.10:54180.service - OpenSSH per-connection server daemon (10.200.16.10:54180).
Dec 16 13:09:31.328850 sshd[5622]: Accepted publickey for core from 10.200.16.10 port 54180 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:09:31.330308 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:31.334769 systemd-logind[1691]: New session 25 of user core.
Dec 16 13:09:31.341909 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:09:31.862890 sshd[5625]: Connection closed by 10.200.16.10 port 54180
Dec 16 13:09:31.861620 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:31.864915 systemd-logind[1691]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:09:31.866704 systemd[1]: sshd@22-10.200.0.33:22-10.200.16.10:54180.service: Deactivated successfully.
Dec 16 13:09:31.869281 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:09:31.871968 systemd-logind[1691]: Removed session 25.