Dec 16 13:06:14.956232 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:06:14.956250 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:06:14.956258 kernel: BIOS-provided physical RAM map:
Dec 16 13:06:14.956263 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:06:14.956267 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 13:06:14.956271 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 13:06:14.956276 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 13:06:14.956280 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 13:06:14.956285 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 13:06:14.956291 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 13:06:14.956295 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 13:06:14.956299 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 13:06:14.956303 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 13:06:14.956307 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 13:06:14.956313 kernel: NX (Execute Disable) protection: active
Dec 16 13:06:14.956319 kernel: APIC: Static calls initialized
Dec 16 13:06:14.956323 kernel: efi: EFI v2.7 by Microsoft
Dec 16 13:06:14.956328 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018
Dec 16 13:06:14.956332 kernel: random: crng init done
Dec 16 13:06:14.956337 kernel: secureboot: Secure boot disabled
Dec 16 13:06:14.956341 kernel: SMBIOS 3.1.0 present.
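The BIOS-e820 entries above are the firmware's map of physical memory. A minimal sketch (not part of the boot log) of adding up the "usable" ranges from journal text like this; the boot.log path is an assumption for illustration:

    import re

    # Match lines like: BIOS-e820: [mem 0x...-0x...] usable
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    def usable_bytes(lines):
        total = 0
        for line in lines:
            m = E820.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

    with open("boot.log") as f:  # hypothetical saved copy of this log
        print(usable_bytes(f) / 2**20, "MiB usable")
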
Dec 16 13:06:14.956346 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 13:06:14.956367 kernel: DMI: Memory slots populated: 2/2
Dec 16 13:06:14.956375 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 13:06:14.956381 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 13:06:14.956386 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 13:06:14.956392 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 13:06:14.956396 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 13:06:14.956400 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:06:14.956405 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 13:06:14.956409 kernel: tsc: Detected 2299.998 MHz processor
Dec 16 13:06:14.956414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:06:14.956419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:06:14.956424 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 13:06:14.956429 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:06:14.956434 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:06:14.956441 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 13:06:14.956445 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 13:06:14.956450 kernel: Using GB pages for direct mapping
Dec 16 13:06:14.956454 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:06:14.956461 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 13:06:14.956466 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956472 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956477 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 13:06:14.956482 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 13:06:14.956487 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956491 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956496 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956501 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:06:14.956507 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 13:06:14.956511 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 13:06:14.956516 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 13:06:14.956521 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 13:06:14.956526 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 13:06:14.956530 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 13:06:14.956535 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 13:06:14.956540 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 13:06:14.956544 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 13:06:14.956550 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 13:06:14.956555 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 13:06:14.956560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 13:06:14.956565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 13:06:14.956570 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 13:06:14.956575 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 16 13:06:14.956580 kernel: Zone ranges:
Dec 16 13:06:14.956584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:06:14.956589 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:06:14.956595 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:06:14.956600 kernel: Device empty
Dec 16 13:06:14.956605 kernel: Movable zone start for each node
Dec 16 13:06:14.956610 kernel: Early memory node ranges
Dec 16 13:06:14.956614 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:06:14.956619 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 13:06:14.956624 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 13:06:14.956628 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 13:06:14.956633 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 13:06:14.956639 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 13:06:14.956644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:06:14.956649 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:06:14.956653 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 13:06:14.956658 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 13:06:14.956663 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 13:06:14.956668 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 13:06:14.956672 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:06:14.956677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:06:14.956683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:06:14.956688 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 13:06:14.956693 kernel: TSC deadline timer available
Dec 16 13:06:14.956697 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:06:14.956702 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:06:14.956707 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:06:14.956712 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:06:14.956716 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:06:14.956721 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:06:14.956725 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:06:14.956732 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 13:06:14.956736 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 13:06:14.956741 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:06:14.956746 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:06:14.956751 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:06:14.956756 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:06:14.956760 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:06:14.956765 kernel: Hyper-V: PV spinlocks enabled
Dec 16 13:06:14.956771 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:06:14.956777 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:06:14.956782 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:06:14.956787 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:06:14.956792 kernel: Fallback order for Node 0: 0
Dec 16 13:06:14.956796 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 13:06:14.956801 kernel: Policy zone: Normal
Dec 16 13:06:14.956806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:06:14.956811 kernel: software IO TLB: area num 2.
Dec 16 13:06:14.956817 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:06:14.956821 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:06:14.956826 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:06:14.956831 kernel: Dynamic Preempt: voluntary
Dec 16 13:06:14.956836 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:06:14.956841 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:06:14.956851 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:06:14.956858 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:06:14.956863 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:06:14.956868 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:06:14.956873 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:06:14.956880 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:06:14.956885 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:06:14.956890 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:06:14.956895 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:06:14.956900 kernel: Using NULL legacy PIC
Dec 16 13:06:14.956907 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 13:06:14.956912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
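The "Kernel command line:" entry repeats the bootloader arguments, including Flatcar's dm-verity root hash for /usr. A minimal, hypothetical sketch of splitting such a line into key/value pairs the way one might when post-processing this log (flag-style arguments such as flatcar.autologin map to None; repeated keys like rootflags keep the last value):

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value} pairs."""
        args = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")  # splits on the first '='
            args[key] = val if sep else None
        return args

    args = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT flatcar.oem.id=azure flatcar.autologin"
    )
    print(args["flatcar.oem.id"])  # -> azure
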
Dec 16 13:06:14.956917 kernel: Console: colour dummy device 80x25
Dec 16 13:06:14.956922 kernel: printk: legacy console [tty1] enabled
Dec 16 13:06:14.956927 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:06:14.956933 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 13:06:14.956938 kernel: ACPI: Core revision 20240827
Dec 16 13:06:14.956943 kernel: Failed to register legacy timer interrupt
Dec 16 13:06:14.956948 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:06:14.956954 kernel: x2apic enabled
Dec 16 13:06:14.956959 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:06:14.956965 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 13:06:14.956970 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 13:06:14.956975 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 13:06:14.956980 kernel: Hyper-V: Using IPI hypercalls
Dec 16 13:06:14.956985 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 13:06:14.956990 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 13:06:14.956995 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 13:06:14.957002 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 13:06:14.957007 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 13:06:14.957012 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 13:06:14.957018 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:06:14.957023 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299998)
Dec 16 13:06:14.957028 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:06:14.957033 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:06:14.957038 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:06:14.957044 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:06:14.957050 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:06:14.957055 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:06:14.957060 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:06:14.957065 kernel: RETBleed: Vulnerable
Dec 16 13:06:14.957070 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:06:14.957075 kernel: active return thunk: its_return_thunk
Dec 16 13:06:14.957080 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:06:14.957085 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:06:14.957090 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:06:14.957095 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:06:14.957101 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:06:14.957107 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:06:14.957112 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:06:14.957117 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 13:06:14.957122 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 13:06:14.957127 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 13:06:14.957132 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:06:14.957137 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:06:14.957142 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:06:14.957147 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:06:14.957151 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 13:06:14.957157 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 13:06:14.957163 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 13:06:14.957168 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 13:06:14.957173 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:06:14.957178 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:06:14.957183 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:06:14.957189 kernel: landlock: Up and running.
Dec 16 13:06:14.957194 kernel: SELinux: Initializing.
Dec 16 13:06:14.957199 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:06:14.957204 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:06:14.957209 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 13:06:14.957214 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 13:06:14.957219 kernel: signal: max sigframe size: 11952
Dec 16 13:06:14.957226 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:06:14.957231 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:06:14.957236 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:06:14.957241 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:06:14.957246 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:06:14.957251 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:06:14.957256 kernel: .... node #0, CPUs: #1
Dec 16 13:06:14.957261 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:06:14.957267 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 16 13:06:14.957273 kernel: Memory: 8068828K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308184K reserved, 0K cma-reserved)
Dec 16 13:06:14.957279 kernel: devtmpfs: initialized
Dec 16 13:06:14.957284 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:06:14.957289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 13:06:14.957294 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:06:14.957299 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:06:14.957304 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:06:14.957309 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:06:14.957315 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:06:14.957321 kernel: audit: type=2000 audit(1765890372.074:1): state=initialized audit_enabled=0 res=1
Dec 16 13:06:14.957326 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:06:14.957331 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:06:14.957336 kernel: cpuidle: using governor menu
Dec 16 13:06:14.957341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:06:14.957346 kernel: dca service started, version 1.12.1
Dec 16 13:06:14.957363 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 13:06:14.957371 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 13:06:14.957379 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:06:14.957387 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:06:14.957395 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:06:14.957404 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:06:14.957412 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:06:14.957421 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:06:14.957430 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:06:14.957438 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:06:14.957448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:06:14.957458 kernel: ACPI: Interpreter enabled
Dec 16 13:06:14.957466 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:06:14.957475 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:06:14.957484 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:06:14.957493 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 13:06:14.957501 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 13:06:14.957510 kernel: iommu: Default domain type: Translated
Dec 16 13:06:14.957518 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:06:14.957526 kernel: efivars: Registered efivars operations
Dec 16 13:06:14.957537 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:06:14.957546 kernel: PCI: System does not support PCI
Dec 16 13:06:14.957555 kernel: vgaarb: loaded
Dec 16 13:06:14.957564 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 13:06:14.957573 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:06:14.957581 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:06:14.957589 kernel: pnp: PnP ACPI init
Dec 16 13:06:14.957598 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 13:06:14.957606 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:06:14.957615 kernel: NET: Registered PF_INET protocol family
Dec 16 13:06:14.957625 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:06:14.957634 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 13:06:14.957643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:06:14.957652 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:06:14.957660 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:06:14.957670 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 13:06:14.957678 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:06:14.957688 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 13:06:14.957698 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:06:14.957707 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:06:14.957715 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:06:14.957724 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:06:14.957733 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Dec 16 13:06:14.957742 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:06:14.957751 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 13:06:14.957760 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 16 13:06:14.957769 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:06:14.957780 kernel: Initialise system trusted keyrings
Dec 16 13:06:14.957790 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 13:06:14.957799 kernel: Key type asymmetric registered
Dec 16 13:06:14.957808 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:06:14.957817 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:06:14.957825 kernel: io scheduler mq-deadline registered
Dec 16 13:06:14.957834 kernel: io scheduler kyber registered
Dec 16 13:06:14.957842 kernel: io scheduler bfq registered
Dec 16 13:06:14.957850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:06:14.957860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:06:14.957868 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:06:14.957876 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 13:06:14.957884 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:06:14.957892 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 13:06:14.958021 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 13:06:14.958101 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:06:14 UTC (1765890374)
Dec 16 13:06:14.958174 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 13:06:14.958187 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 13:06:14.958196 kernel: efifb: probing for efifb
Dec 16 13:06:14.958206 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 13:06:14.958216 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 13:06:14.958225 kernel: efifb: scrolling: redraw
Dec 16 13:06:14.958235 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:06:14.958244 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:06:14.958253 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:06:14.958263 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:06:14.958274 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:06:14.958283 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:06:14.958293 kernel: Segment Routing with IPv6
Dec 16 13:06:14.958302 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:06:14.958311 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:06:14.958320 kernel: Key type dns_resolver registered
Dec 16 13:06:14.958330 kernel: IPI shorthand broadcast: enabled
Dec 16 13:06:14.958339 kernel: sched_clock: Marking stable (2959004426, 95612991)->(3385655276, -331037859)
Dec 16 13:06:14.958348 kernel: registered taskstats version 1
Dec 16 13:06:14.960442 kernel: Loading compiled-in X.509 certificates
Dec 16 13:06:14.960453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:06:14.960462 kernel: Demotion targets for Node 0: null
Dec 16 13:06:14.960471 kernel: Key type .fscrypt registered
Dec 16 13:06:14.960479 kernel: Key type fscrypt-provisioning registered
Dec 16 13:06:14.960488 kernel: ima: No TPM chip found, activating TPM-bypass!
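The rtc_cmos line above pairs a human-readable timestamp with its Unix epoch value. A quick self-contained check of that conversion, using only the two values the log itself reports:

    from datetime import datetime, timezone

    # Epoch seconds printed by rtc_cmos above.
    epoch = 1765890374
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-12-16T13:06:14+00:00, matching "2025-12-16T13:06:14 UTC"
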
Dec 16 13:06:14.960497 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:06:14.960506 kernel: ima: No architecture policies found
Dec 16 13:06:14.960514 kernel: clk: Disabling unused clocks
Dec 16 13:06:14.961377 kernel: Warning: unable to open an initial console.
Dec 16 13:06:14.961391 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:06:14.961400 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:06:14.961409 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:06:14.961417 kernel: Run /init as init process
Dec 16 13:06:14.961425 kernel: with arguments:
Dec 16 13:06:14.961434 kernel: /init
Dec 16 13:06:14.961443 kernel: with environment:
Dec 16 13:06:14.961451 kernel: HOME=/
Dec 16 13:06:14.961461 kernel: TERM=linux
Dec 16 13:06:14.961472 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:06:14.961485 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:06:14.961495 systemd[1]: Detected virtualization microsoft.
Dec 16 13:06:14.961504 systemd[1]: Detected architecture x86-64.
Dec 16 13:06:14.961513 systemd[1]: Running in initrd.
Dec 16 13:06:14.961522 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:06:14.961533 systemd[1]: Hostname set to <localhost>.
Dec 16 13:06:14.961543 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:06:14.961551 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:06:14.961560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:06:14.961569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:06:14.961579 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:06:14.961589 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:06:14.961598 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:06:14.961610 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:06:14.961621 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:06:14.961630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:06:14.961639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:06:14.961647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:06:14.961656 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:06:14.961665 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:06:14.961676 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:06:14.961685 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:06:14.961695 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:06:14.961704 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:06:14.961713 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:06:14.961721 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:06:14.961731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:06:14.961740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:06:14.961750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:06:14.961761 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:06:14.961771 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:06:14.961780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:06:14.961789 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:06:14.961799 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:06:14.961808 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:06:14.961818 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:06:14.961827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:06:14.961870 systemd-journald[186]: Collecting audit messages is disabled.
Dec 16 13:06:14.961895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:14.961908 systemd-journald[186]: Journal started
Dec 16 13:06:14.961932 systemd-journald[186]: Runtime Journal (/run/log/journal/c5fee8aa37a34b60bf7fa17125fb600d) is 8M, max 158.6M, 150.6M free.
Dec 16 13:06:14.966380 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:06:14.971665 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:06:14.973911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:06:14.976409 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:06:14.980457 systemd-modules-load[187]: Inserted module 'overlay'
Dec 16 13:06:14.981891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:06:14.994465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:06:15.013123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:15.023431 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:06:15.023455 kernel: Bridge firewalling registered
Dec 16 13:06:15.016051 systemd-tmpfiles[199]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:06:15.021197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:06:15.021750 systemd-modules-load[187]: Inserted module 'br_netfilter'
Dec 16 13:06:15.024307 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:06:15.028613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:06:15.035860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:06:15.041333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:06:15.055453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:06:15.069451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:06:15.073260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:06:15.077489 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:06:15.081015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:06:15.094458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:06:15.107691 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:06:15.136932 systemd-resolved[225]: Positive Trust Anchors:
Dec 16 13:06:15.138890 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:06:15.142253 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:06:15.157226 systemd-resolved[225]: Defaulting to hostname 'linux'.
Dec 16 13:06:15.160368 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:06:15.166447 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:06:15.181368 kernel: SCSI subsystem initialized
Dec 16 13:06:15.189368 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:06:15.197367 kernel: iscsi: registered transport (tcp)
Dec 16 13:06:15.215372 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:06:15.215411 kernel: QLogic iSCSI HBA Driver
Dec 16 13:06:15.228293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:06:15.242100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:06:15.242590 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:06:15.273186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:06:15.275468 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:06:15.315375 kernel: raid6: avx512x4 gen() 46464 MB/s
Dec 16 13:06:15.333365 kernel: raid6: avx512x2 gen() 45426 MB/s
Dec 16 13:06:15.350362 kernel: raid6: avx512x1 gen() 27506 MB/s
Dec 16 13:06:15.368365 kernel: raid6: avx2x4 gen() 36911 MB/s
Dec 16 13:06:15.385361 kernel: raid6: avx2x2 gen() 40130 MB/s
Dec 16 13:06:15.403123 kernel: raid6: avx2x1 gen() 32399 MB/s
Dec 16 13:06:15.403203 kernel: raid6: using algorithm avx512x4 gen() 46464 MB/s
Dec 16 13:06:15.422560 kernel: raid6: .... xor() 7709 MB/s, rmw enabled
Dec 16 13:06:15.422579 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 13:06:15.439369 kernel: xor: automatically using best checksumming function avx
Dec 16 13:06:15.554371 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:06:15.558524 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:06:15.561472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:06:15.580121 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Dec 16 13:06:15.583675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:06:15.590745 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:06:15.608926 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Dec 16 13:06:15.626992 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:06:15.630893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:06:15.659273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:06:15.666465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:06:15.707370 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:06:15.721939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:06:15.722048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:15.728691 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:15.734580 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:06:15.735733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:15.739644 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 13:06:15.777486 kernel: hv_vmbus: registering driver hv_pci
Dec 16 13:06:15.775892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:06:15.775968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:15.781412 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 13:06:15.791290 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 16 13:06:15.791323 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Dec 16 13:06:15.791498 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 13:06:15.791511 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 16 13:06:15.793632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
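The raid6 lines above record the kernel benchmarking each available gen() implementation at boot and keeping the fastest one. A toy re-creation of that selection, fed with the throughputs measured in this very log:

    # MB/s figures copied from the raid6 benchmark lines above.
    gen_speeds = {
        "avx512x4": 46464, "avx512x2": 45426, "avx512x1": 27506,
        "avx2x4": 36911, "avx2x2": 40130, "avx2x1": 32399,
    }
    best = max(gen_speeds, key=gen_speeds.get)
    print(f"raid6: using algorithm {best} gen() {gen_speeds[best]} MB/s")
    # -> raid6: using algorithm avx512x4 gen() 46464 MB/s
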
Dec 16 13:06:15.800562 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 13:06:15.809961 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Dec 16 13:06:15.810109 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Dec 16 13:06:15.810215 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9a677e (unnamed net_device) (uninitialized): VF slot 1 added
Dec 16 13:06:15.821436 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:06:15.826552 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:06:15.830418 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Dec 16 13:06:15.839875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:15.846199 kernel: PTP clock support registered
Dec 16 13:06:15.849634 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:06:15.849828 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Dec 16 13:06:15.873240 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 13:06:15.873279 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 13:06:15.873292 kernel: hv_vmbus: registering driver hv_utils
Dec 16 13:06:15.877433 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 13:06:15.881637 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 13:06:15.881667 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 13:06:15.881679 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 13:06:16.157809 systemd-resolved[225]: Clock change detected. Flushing caches.
Dec 16 13:06:16.164302 kernel: scsi host0: storvsc_host_t
Dec 16 13:06:16.164359 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 13:06:16.169554 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 16 13:06:16.169586 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 13:06:16.171469 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 13:06:16.183942 kernel: nvme nvme0: pci function c05b:00:00.0
Dec 16 13:06:16.184135 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:06:16.335771 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 13:06:16.342697 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:06:16.347930 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 13:06:16.348121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 13:06:16.348675 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 13:06:16.366708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:06:16.382687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:06:16.848771 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:06:17.094421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:06:17.103953 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Dec 16 13:06:17.123947 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Dec 16 13:06:17.133097 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:06:17.134889 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:06:17.156116 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Dec 16 13:06:17.135273 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:06:17.136476 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:06:17.136731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:06:17.169231 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Dec 16 13:06:17.169368 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Dec 16 13:06:17.169469 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:06:17.136753 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:06:17.177757 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:06:17.177779 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:06:17.137562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:06:17.183675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:06:17.183696 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Dec 16 13:06:17.140758 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:06:17.191052 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Dec 16 13:06:17.165760 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:06:17.195725 kernel: pci 7870:00:00.0: enabling Extended Tags
Dec 16 13:06:17.212725 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:06:17.212887 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 16 13:06:17.216855 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Dec 16 13:06:17.220913 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:06:17.236338 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Dec 16 13:06:17.239704 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9a677e eth0: VF registering: eth1
Dec 16 13:06:17.239847 kernel: mana 7870:00:00.0 eth1: joined to eth0
Dec 16 13:06:17.251679 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Dec 16 13:06:18.187502 disk-uuid[673]: The operation has completed successfully.
Dec 16 13:06:18.190698 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 13:06:18.244846 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:06:18.244935 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:06:18.276295 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:06:18.289865 sh[709]: Success
Dec 16 13:06:18.320973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:06:18.321034 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:06:18.322090 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:06:18.330684 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:06:18.582294 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
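verity-setup.service maps the read-only /usr partition through dm-verity, using the root hash that verity.usrhash put on the kernel command line. A hedged sketch of checking such a device offline with the real veritysetup(8) CLI via subprocess; the device paths are assumptions for illustration, and the exact data/hash layout of a Flatcar USR partition may differ:

    import subprocess

    # Hypothetical layout: data and hash areas on the same partition.
    DATA_DEV = "/dev/disk/by-partlabel/USR-A"
    HASH_DEV = DATA_DEV
    ROOT_HASH = "a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022"

    # "veritysetup verify" exits non-zero if any block fails the hash-tree check.
    result = subprocess.run(
        ["veritysetup", "verify", DATA_DEV, HASH_DEV, ROOT_HASH],
        capture_output=True, text=True,
    )
    print("verity OK" if result.returncode == 0 else result.stderr)
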
Dec 16 13:06:18.587750 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:06:18.600733 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:06:18.611695 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (722)
Dec 16 13:06:18.611732 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:06:18.614007 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:06:18.986789 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:06:18.986874 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:06:18.987980 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:06:19.022302 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:06:19.026096 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:06:19.027555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:06:19.028273 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:06:19.033771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:06:19.061647 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (755)
Dec 16 13:06:19.061707 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:06:19.064723 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:06:19.095850 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:06:19.095884 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:06:19.097335 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:06:19.103791 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:06:19.104127 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:06:19.108783 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:06:19.120761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:06:19.123409 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:06:19.145646 systemd-networkd[891]: lo: Link UP
Dec 16 13:06:19.145655 systemd-networkd[891]: lo: Gained carrier
Dec 16 13:06:19.151620 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:06:19.146676 systemd-networkd[891]: Enumeration completed
Dec 16 13:06:19.156563 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:06:19.156767 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9a677e eth0: Data path switched to VF: enP30832s1
Dec 16 13:06:19.147048 systemd-networkd[891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:19.147051 systemd-networkd[891]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:06:19.147355 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:06:19.155174 systemd[1]: Reached target network.target - Network.
Dec 16 13:06:19.157013 systemd-networkd[891]: enP30832s1: Link UP
Dec 16 13:06:19.157078 systemd-networkd[891]: eth0: Link UP
Dec 16 13:06:19.157216 systemd-networkd[891]: eth0: Gained carrier
Dec 16 13:06:19.157226 systemd-networkd[891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:19.163314 systemd-networkd[891]: enP30832s1: Gained carrier
Dec 16 13:06:19.168690 systemd-networkd[891]: eth0: DHCPv4 address 10.200.0.12/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:06:20.730872 systemd-networkd[891]: eth0: Gained IPv6LL
Dec 16 13:06:21.471960 ignition[880]: Ignition 2.22.0
Dec 16 13:06:21.471974 ignition[880]: Stage: fetch-offline
Dec 16 13:06:21.472089 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:21.475356 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:06:21.472096 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:21.472184 ignition[880]: parsed url from cmdline: ""
Dec 16 13:06:21.482551 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:06:21.472187 ignition[880]: no config URL provided
Dec 16 13:06:21.472195 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:06:21.472200 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:06:21.472205 ignition[880]: failed to fetch config: resource requires networking
Dec 16 13:06:21.472444 ignition[880]: Ignition finished successfully
Dec 16 13:06:21.514297 ignition[900]: Ignition 2.22.0
Dec 16 13:06:21.514307 ignition[900]: Stage: fetch
Dec 16 13:06:21.514526 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:21.514533 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:21.514611 ignition[900]: parsed url from cmdline: ""
Dec 16 13:06:21.514614 ignition[900]: no config URL provided
Dec 16 13:06:21.514625 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:06:21.514630 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:06:21.514650 ignition[900]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 13:06:21.579025 ignition[900]: GET result: OK
Dec 16 13:06:21.579882 ignition[900]: config has been read from IMDS userdata
Dec 16 13:06:21.579914 ignition[900]: parsing config with SHA512: 719b5c4b7c230986e93d38a989fe090bb71caf247cbfea7303dba4f513a05bf33ec53bc5826303f5478acfabc1b9953ddfcfc4de2404f18e45436fe0f336bf8a
Dec 16 13:06:21.584207 unknown[900]: fetched base config from "system"
Dec 16 13:06:21.585352 ignition[900]: fetch: fetch complete
Dec 16 13:06:21.584216 unknown[900]: fetched base config from "system"
Dec 16 13:06:21.585359 ignition[900]: fetch: fetch passed
Dec 16 13:06:21.584221 unknown[900]: fetched user config from "azure"
Dec 16 13:06:21.585407 ignition[900]: Ignition finished successfully
Dec 16 13:06:21.587473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:06:21.592133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
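The fetch stage pulls the Ignition config from the Azure Instance Metadata Service at 169.254.169.254, the same endpoint visible in the GET line above. A minimal sketch of the equivalent request from inside an Azure VM; IMDS requires the "Metadata: true" header, the api-version is copied from the log, and on Azure the returned userData is base64-encoded (Ignition decodes it before parsing):

    import urllib.request

    # Endpoint and api-version taken from the Ignition GET line above.
    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = resp.read()  # base64-encoded payload on Azure
    print(len(user_data), "bytes of userData")
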
Dec 16 13:06:21.617403 ignition[907]: Ignition 2.22.0
Dec 16 13:06:21.617414 ignition[907]: Stage: kargs
Dec 16 13:06:21.617606 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:21.617613 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:21.618410 ignition[907]: kargs: kargs passed
Dec 16 13:06:21.622648 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:06:21.618444 ignition[907]: Ignition finished successfully
Dec 16 13:06:21.626120 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:06:21.649692 ignition[913]: Ignition 2.22.0
Dec 16 13:06:21.649703 ignition[913]: Stage: disks
Dec 16 13:06:21.649898 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:21.651884 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:06:21.649905 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:21.650861 ignition[913]: disks: disks passed
Dec 16 13:06:21.650899 ignition[913]: Ignition finished successfully
Dec 16 13:06:21.658436 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:06:21.659692 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:06:21.663714 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:06:21.667382 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:06:21.672135 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:06:21.676556 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:06:21.777858 systemd-fsck[922]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Dec 16 13:06:21.781929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:06:21.783290 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:06:22.097681 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:06:22.097893 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:06:22.102102 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:06:22.118542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:06:22.124228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:06:22.131848 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 13:06:22.137007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:06:22.137039 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:06:22.144403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:06:22.146964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:06:22.153580 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (931)
Dec 16 13:06:22.156635 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:06:22.156676 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:06:22.161691 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:06:22.161843 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:06:22.161859 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:06:22.163388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:06:22.734449 coreos-metadata[933]: Dec 16 13:06:22.734 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:06:22.739333 coreos-metadata[933]: Dec 16 13:06:22.739 INFO Fetch successful
Dec 16 13:06:22.740754 coreos-metadata[933]: Dec 16 13:06:22.739 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:06:22.747779 coreos-metadata[933]: Dec 16 13:06:22.747 INFO Fetch successful
Dec 16 13:06:22.763911 coreos-metadata[933]: Dec 16 13:06:22.763 INFO wrote hostname ci-4459.2.2-a-ace8908665 to /sysroot/etc/hostname
Dec 16 13:06:22.765773 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:06:23.014548 initrd-setup-root[964]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:06:23.059541 initrd-setup-root[971]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:06:23.081114 initrd-setup-root[978]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:06:23.085607 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:06:23.848339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:06:23.853740 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:06:23.857463 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:06:23.868021 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:06:23.873685 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:06:23.886415 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:06:23.902472 ignition[1055]: INFO : Ignition 2.22.0
Dec 16 13:06:23.902472 ignition[1055]: INFO : Stage: mount
Dec 16 13:06:23.905999 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:23.905999 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:23.905999 ignition[1055]: INFO : mount: mount passed
Dec 16 13:06:23.905999 ignition[1055]: INFO : Ignition finished successfully
Dec 16 13:06:23.904812 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:06:23.912418 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:06:23.928535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:06:23.952678 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1064)
Dec 16 13:06:23.952712 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:06:23.957719 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:06:23.961955 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:06:23.961996 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:06:23.962894 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:06:23.964606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:06:23.992637 ignition[1081]: INFO : Ignition 2.22.0
Dec 16 13:06:23.993864 ignition[1081]: INFO : Stage: files
Dec 16 13:06:23.995244 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:23.996588 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:23.998702 ignition[1081]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:06:24.000539 ignition[1081]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:06:24.002305 ignition[1081]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:06:24.069141 ignition[1081]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:06:24.072742 ignition[1081]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:06:24.072742 ignition[1081]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:06:24.071198 unknown[1081]: wrote ssh authorized keys file for user: core
Dec 16 13:06:24.087108 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:06:24.091739 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 13:06:24.129761 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:06:24.237808 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:06:24.241740 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:06:24.268699 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 13:06:24.563525 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:06:24.768959 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:06:24.768959 ignition[1081]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:06:24.826118 ignition[1081]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:06:24.841316 ignition[1081]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:06:24.841316 ignition[1081]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:06:24.847733 ignition[1081]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:06:24.847733 ignition[1081]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:06:24.847733 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:06:24.847733 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:06:24.847733 ignition[1081]: INFO : files: files passed
Dec 16 13:06:24.847733 ignition[1081]: INFO : Ignition finished successfully
Dec 16 13:06:24.844483 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:06:24.850865 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:06:24.854923 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:06:24.868854 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:06:24.877720 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:06:24.877720 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:06:24.868937 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
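The files-stage ops above (files, a link, a systemd unit, a preset) map one-to-one onto sections of an Ignition config. A minimal sketch of a config that would drive them; the paths, download URLs, and unit name are from the log, while the version string and the elided contents ("...") are illustrative:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "..." }
        ]
      }
    }

Note that Ignition runs from the initramfs and writes into the not-yet-pivoted root, which is why the log shows each path prefixed with /sysroot while the config paths omit it.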
Dec 16 13:06:24.891751 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:06:24.880268 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:06:24.883871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:06:24.887007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:06:24.924897 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:06:24.924976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:06:24.928364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:06:24.932727 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:06:24.932993 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:06:24.935650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:06:24.974577 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:06:24.978385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:06:24.996980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:06:24.999789 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:06:25.002979 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:06:25.004298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:06:25.004404 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:06:25.004826 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:06:25.005092 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:06:25.005366 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:06:25.005622 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:06:25.006237 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:06:25.007063 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:06:25.007238 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:06:25.020806 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:06:25.022126 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:06:25.022441 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:06:25.023055 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:06:25.023168 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:06:25.023264 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:06:25.049332 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:06:25.052211 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:06:25.055759 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:06:25.056040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:06:25.058539 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:06:25.058681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:06:25.065048 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:06:25.065184 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:06:25.065442 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:06:25.065555 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:06:25.065828 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:06:25.065933 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:06:25.067841 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:06:25.071745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:06:25.072159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:06:25.072267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:06:25.072849 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:06:25.072950 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:06:25.083557 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:06:25.099162 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:06:25.111537 ignition[1135]: INFO : Ignition 2.22.0
Dec 16 13:06:25.111537 ignition[1135]: INFO : Stage: umount
Dec 16 13:06:25.111537 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:06:25.111537 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:06:25.127720 ignition[1135]: INFO : umount: umount passed
Dec 16 13:06:25.127720 ignition[1135]: INFO : Ignition finished successfully
Dec 16 13:06:25.113152 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:06:25.113226 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:06:25.116023 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:06:25.116105 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:06:25.118350 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:06:25.118384 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:06:25.122127 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:06:25.122192 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:06:25.126109 systemd[1]: Stopped target network.target - Network.
Dec 16 13:06:25.129174 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:06:25.131405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:06:25.137750 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:06:25.141727 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:06:25.145883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:06:25.147527 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:06:25.151704 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:06:25.154116 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:06:25.154156 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:06:25.155636 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:06:25.155677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:06:25.156844 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:06:25.156886 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:06:25.158339 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:06:25.158375 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:06:25.177457 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:06:25.180027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:06:25.182999 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:06:25.185392 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:06:25.185485 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:06:25.189276 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:06:25.189375 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:06:25.191465 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:06:25.191496 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:06:25.194826 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:06:25.199572 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:06:25.199628 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:06:25.201560 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:06:25.202232 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:06:25.202315 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:06:25.252349 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9a677e eth0: Data path switched from VF: enP30832s1
Dec 16 13:06:25.252851 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:06:25.208379 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:06:25.208862 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:06:25.208927 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:06:25.214190 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:06:25.214227 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:06:25.218766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:06:25.218810 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:06:25.223769 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:06:25.223812 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:06:25.224016 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:06:25.224105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:06:25.229043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:06:25.229113 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:06:25.231734 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:06:25.231761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:06:25.231909 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:06:25.231942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:06:25.232243 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:06:25.232277 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:06:25.232751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:06:25.232782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:06:25.237415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:06:25.250086 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:06:25.250137 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:06:25.255054 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:06:25.255105 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:06:25.269694 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:06:25.269742 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:06:25.272719 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:06:25.272754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:06:25.287915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:06:25.287965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:25.291989 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:06:25.292033 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:06:25.292053 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:06:25.292078 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:06:25.292365 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:06:25.292447 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:06:25.295557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:06:25.295629 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:06:25.452643 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:06:25.452745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:06:25.456737 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:06:25.457178 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:06:25.457218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:06:25.466330 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:06:25.482764 systemd[1]: Switching root.
Dec 16 13:06:25.557260 systemd-journald[186]: Journal stopped
Dec 16 13:06:29.704096 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:06:29.704128 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:06:29.704143 kernel: SELinux: policy capability open_perms=1
Dec 16 13:06:29.704152 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:06:29.704160 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:06:29.704168 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:06:29.704178 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:06:29.704187 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:06:29.704197 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:06:29.704204 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:06:29.704213 kernel: audit: type=1403 audit(1765890386.765:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:06:29.704224 systemd[1]: Successfully loaded SELinux policy in 189.019ms.
Dec 16 13:06:29.704235 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.255ms.
Dec 16 13:06:29.704247 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:06:29.704259 systemd[1]: Detected virtualization microsoft.
Dec 16 13:06:29.704269 systemd[1]: Detected architecture x86-64.
Dec 16 13:06:29.704279 systemd[1]: Detected first boot.
Dec 16 13:06:29.704290 systemd[1]: Hostname set to <ci-4459.2.2-a-ace8908665>.
Dec 16 13:06:29.704300 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:06:29.704311 zram_generator::config[1178]: No configuration found.
Dec 16 13:06:29.704322 kernel: Guest personality initialized and is inactive
Dec 16 13:06:29.704332 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Dec 16 13:06:29.704342 kernel: Initialized host personality
Dec 16 13:06:29.704351 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:06:29.704362 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:06:29.704373 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:06:29.704383 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:06:29.704392 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:06:29.704403 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:06:29.704413 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:06:29.704424 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:06:29.704433 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:06:29.704443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:06:29.704453 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:06:29.704463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:06:29.704474 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:06:29.704483 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:06:29.704493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:06:29.704502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:06:29.704512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:06:29.704524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:06:29.704535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:06:29.704545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:06:29.704557 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:06:29.704568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:06:29.704579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:06:29.704589 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:06:29.704600 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:06:29.704610 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:06:29.704620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:06:29.704632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:06:29.704642 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:06:29.704652 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:06:29.704701 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:06:29.704712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:06:29.704723 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:06:29.704736 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:06:29.704747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:06:29.704759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:06:29.704769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:06:29.704780 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:06:29.704790 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:06:29.704800 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:06:29.704812 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:06:29.704822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:29.704832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:06:29.704842 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:06:29.704852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:06:29.704863 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:06:29.704873 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:06:29.704883 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:06:29.704893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:29.704905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:06:29.704914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:06:29.704922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:29.704931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:06:29.704940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:29.704949 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:06:29.704959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:29.704970 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:06:29.704982 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:06:29.704992 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:06:29.705000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:06:29.705007 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:06:29.705016 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:29.705023 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:06:29.705031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:06:29.705040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:06:29.705050 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:06:29.705058 kernel: loop: module loaded
Dec 16 13:06:29.705066 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:06:29.705116 systemd-journald[1261]: Collecting audit messages is disabled.
Dec 16 13:06:29.705139 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:06:29.705148 kernel: fuse: init (API version 7.41)
Dec 16 13:06:29.705156 systemd-journald[1261]: Journal started
Dec 16 13:06:29.705177 systemd-journald[1261]: Runtime Journal (/run/log/journal/dd617fd97d5147579a22c771470997d7) is 8M, max 158.6M, 150.6M free.
Dec 16 13:06:29.316193 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:06:29.328172 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:06:29.328491 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:06:29.712725 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:06:29.712832 systemd[1]: Stopped verity-setup.service.
Dec 16 13:06:29.719687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:29.724894 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:06:29.727307 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:06:29.728764 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:06:29.730243 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:06:29.732784 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:06:29.734190 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:06:29.736791 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:06:29.738297 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:06:29.740366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:06:29.742134 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:06:29.742290 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:06:29.744544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:29.744728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:29.746318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:29.746445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:29.750881 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:06:29.751025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:06:29.753864 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:29.753996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:29.756940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:06:29.759367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:06:29.762458 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:06:29.770863 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:06:29.772978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:06:29.776274 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:06:29.778445 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:06:29.778478 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:06:29.783777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:06:29.789808 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:06:29.792157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:29.793826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:06:29.798609 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:06:29.801969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:06:29.806804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:06:29.809101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:06:29.810777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:06:29.814815 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:06:29.818822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:06:29.825714 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:06:29.825947 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:06:29.826152 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:06:29.852345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:06:29.856123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:06:29.865030 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:06:29.874594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:06:29.877578 systemd-journald[1261]: Time spent on flushing to /var/log/journal/dd617fd97d5147579a22c771470997d7 is 14.573ms for 994 entries.
Dec 16 13:06:29.877578 systemd-journald[1261]: System Journal (/var/log/journal/dd617fd97d5147579a22c771470997d7) is 8M, max 2.6G, 2.6G free.
Dec 16 13:06:29.958424 systemd-journald[1261]: Received client request to flush runtime journal.
Dec 16 13:06:29.958468 kernel: ACPI: bus type drm_connector registered
Dec 16 13:06:29.958489 kernel: loop0: detected capacity change from 0 to 27936
Dec 16 13:06:29.889713 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:06:29.889870 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:06:29.946610 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 16 13:06:29.946623 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 16 13:06:29.949231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:06:29.954576 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:06:29.963879 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:06:29.968383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:06:30.012061 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:06:30.135276 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:06:30.139658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:06:30.156023 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Dec 16 13:06:30.156040 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Dec 16 13:06:30.158253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:06:30.329984 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:06:30.341693 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:06:30.416687 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:06:30.459122 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:06:30.464006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:06:30.492925 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Dec 16 13:06:30.719484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:06:30.724989 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:06:30.804458 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:06:30.830630 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:06:30.883707 kernel: loop2: detected capacity change from 0 to 224512
Dec 16 13:06:30.893748 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:06:30.895703 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:06:30.900685 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:06:30.903450 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:06:30.904936 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:06:30.908199 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:06:30.908244 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:06:30.913757 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:06:30.919743 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:06:30.936678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:06:30.966676 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:06:31.027882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:31.042363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:06:31.042550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:31.047882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:31.078286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:06:31.078904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:31.084129 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:06:31.087070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:06:31.329254 systemd-networkd[1352]: lo: Link UP
Dec 16 13:06:31.329536 systemd-networkd[1352]: lo: Gained carrier
Dec 16 13:06:31.331325 systemd-networkd[1352]: Enumeration completed
Dec 16 13:06:31.331619 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:31.331628 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:06:31.331790 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:06:31.333548 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:06:31.340135 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:06:31.336770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:06:31.344697 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:06:31.346693 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd9a677e eth0: Data path switched to VF: enP30832s1
Dec 16 13:06:31.347889 systemd-networkd[1352]: enP30832s1: Link UP
Dec 16 13:06:31.347965 systemd-networkd[1352]: eth0: Link UP
Dec 16 13:06:31.347968 systemd-networkd[1352]: eth0: Gained carrier
Dec 16 13:06:31.347982 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:06:31.354192 systemd-networkd[1352]: enP30832s1: Gained carrier
Dec 16 13:06:31.361762 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.0.12/24, gateway 10.200.0.1 acquired from 168.63.129.16
Dec 16 13:06:31.398234 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:06:31.410685 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:06:31.414545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:06:31.417767 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:06:31.451715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:06:31.521692 kernel: loop4: detected capacity change from 0 to 27936
Dec 16 13:06:31.533678 kernel: loop5: detected capacity change from 0 to 128560
Dec 16 13:06:31.542686 kernel: loop6: detected capacity change from 0 to 224512
Dec 16 13:06:31.555685 kernel: loop7: detected capacity change from 0 to 110984
Dec 16 13:06:31.581809 (sd-merge)[1444]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 13:06:31.582140 (sd-merge)[1444]: Merged extensions into '/usr'.
Dec 16 13:06:31.585114 systemd[1]: Reload requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:06:31.585127 systemd[1]: Reloading...
Dec 16 13:06:31.633750 zram_generator::config[1470]: No configuration found.
Dec 16 13:06:31.827765 systemd[1]: Reloading finished in 242 ms.
Dec 16 13:06:31.844358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:06:31.846371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:06:31.859008 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:06:31.863304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:06:31.875897 systemd[1]: Reload requested from client PID 1535 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:06:31.875916 systemd[1]: Reloading...
Dec 16 13:06:31.880514 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:06:31.880806 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:06:31.881090 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:06:31.881363 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:06:31.882084 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
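eth0 above is matched by Flatcar's catch-all unit, zz-default.network, which is why networkd warns about the potentially unpredictable interface name before acquiring a DHCPv4 lease. A sketch of what such a catch-all DHCP unit looks like (illustrative; only the file path is attested by the log, and the shipped file may carry additional options):

    [Match]
    # match any interface not claimed by a more specific .network file
    Name=*

    [Network]
    DHCP=yes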
Dec 16 13:06:31.882375 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Dec 16 13:06:31.882466 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Dec 16 13:06:31.902269 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:06:31.902282 systemd-tmpfiles[1536]: Skipping /boot
Dec 16 13:06:31.919695 zram_generator::config[1563]: No configuration found.
Dec 16 13:06:31.922233 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:06:31.922248 systemd-tmpfiles[1536]: Skipping /boot
Dec 16 13:06:32.106655 systemd[1]: Reloading finished in 230 ms.
Dec 16 13:06:32.131195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:06:32.140781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.141727 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:06:32.145902 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:06:32.147632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:32.150721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:32.153646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:32.157917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:32.160243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:32.160438 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:32.162247 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:06:32.172747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:06:32.176909 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:06:32.178939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.182084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:32.182234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:32.185053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:32.185206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:32.190160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:32.190314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:32.199626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.199902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:32.204928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:32.207936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:32.212838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:32.215797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:32.215979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:32.216125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.224968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:06:32.229454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:32.229614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:32.233481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:32.233635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:32.239111 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:32.239393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:32.250109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.250371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:06:32.252754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:06:32.255911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:06:32.261924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:06:32.264979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:06:32.267103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:06:32.267221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:06:32.267409 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:06:32.269389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:06:32.273178 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:06:32.279797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:06:32.280045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:06:32.285528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:06:32.285745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:06:32.288157 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:06:32.288372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:06:32.292084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:06:32.292485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:06:32.296349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:06:32.296903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:06:32.303069 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:06:32.331654 systemd-resolved[1633]: Positive Trust Anchors:
Dec 16 13:06:32.331681 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:06:32.331716 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:06:32.349815 systemd-resolved[1633]: Using system hostname 'ci-4459.2.2-a-ace8908665'.
Dec 16 13:06:32.350934 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:06:32.352420 systemd[1]: Reached target network.target - Network.
Dec 16 13:06:32.353762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:06:32.370974 augenrules[1675]: No rules
Dec 16 13:06:32.372048 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:06:32.372239 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:06:32.442801 systemd-networkd[1352]: eth0: Gained IPv6LL
Dec 16 13:06:32.444604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:06:32.448876 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:06:33.238808 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:06:33.241936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:06:36.265653 ldconfig[1312]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:06:36.276194 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:06:36.281850 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:06:36.298166 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:06:36.299756 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:06:36.302791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:06:36.305725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:06:36.308722 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:06:36.310365 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:06:36.311773 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:06:36.314719 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:06:36.317736 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:06:36.317767 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:06:36.320704 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:06:36.324583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:06:36.326981 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:06:36.329912 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:06:36.331814 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:06:36.334745 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:06:36.343068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:06:36.346220 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:06:36.350219 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:06:36.353359 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:06:36.356705 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:06:36.359735 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:06:36.359757 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:06:36.361440 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 13:06:36.363333 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:06:36.367820 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:06:36.370858 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:06:36.380819 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:06:36.384072 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:06:36.389760 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:06:36.391208 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:06:36.395302 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:06:36.397759 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 16 13:06:36.399187 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 13:06:36.401325 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 13:06:36.403118 jq[1696]: false
Dec 16 13:06:36.403648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:06:36.410834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:06:36.418620 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:06:36.419704 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
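prepare-helm.service, starting above, is the unit the Ignition files stage installed earlier to unpack the downloaded tarball into /opt/bin. A plausible shape for such a oneshot unit; only the unit name, its description, and the tarball path are attested by the log, the rest is an illustrative sketch:

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.0-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    # extract only the helm binary, dropping the leading linux-amd64/ path component
    ExecStart=/usr/bin/tar -xf /opt/helm-v3.17.0-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm

    [Install]
    WantedBy=multi-user.target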
Dec 16 13:06:36.425723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:06:36.433320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:06:36.437844 KVP[1699]: KVP starting; pid is:1699 Dec 16 13:06:36.442180 KVP[1699]: KVP LIC Version: 3.1 Dec 16 13:06:36.442694 kernel: hv_utils: KVP IC version 4.0 Dec 16 13:06:36.442926 extend-filesystems[1697]: Found /dev/nvme0n1p6 Dec 16 13:06:36.442859 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:06:36.447541 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:06:36.448454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:06:36.450633 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:06:36.459110 oslogin_cache_refresh[1698]: Refreshing passwd entry cache Dec 16 13:06:36.459897 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Refreshing passwd entry cache Dec 16 13:06:36.461806 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:06:36.469691 extend-filesystems[1697]: Found /dev/nvme0n1p9 Dec 16 13:06:36.472718 extend-filesystems[1697]: Checking size of /dev/nvme0n1p9 Dec 16 13:06:36.470221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:06:36.477028 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:06:36.477251 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:06:36.482252 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:06:36.482523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:06:36.488009 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Failure getting users, quitting Dec 16 13:06:36.488009 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:06:36.488009 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Refreshing group entry cache Dec 16 13:06:36.487867 oslogin_cache_refresh[1698]: Failure getting users, quitting Dec 16 13:06:36.487883 oslogin_cache_refresh[1698]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:06:36.487922 oslogin_cache_refresh[1698]: Refreshing group entry cache Dec 16 13:06:36.502547 jq[1712]: true Dec 16 13:06:36.517622 chronyd[1688]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:06:36.518832 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Failure getting groups, quitting Dec 16 13:06:36.518832 google_oslogin_nss_cache[1698]: oslogin_cache_refresh[1698]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:06:36.518828 oslogin_cache_refresh[1698]: Failure getting groups, quitting Dec 16 13:06:36.518838 oslogin_cache_refresh[1698]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:06:36.523239 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:06:36.523446 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 16 13:06:36.525968 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:06:36.529129 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:06:36.532010 (ntainerd)[1739]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:06:36.540991 extend-filesystems[1697]: Old size kept for /dev/nvme0n1p9 Dec 16 13:06:36.538805 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:06:36.539053 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:06:36.548444 jq[1737]: true Dec 16 13:06:36.558355 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:06:36.561888 chronyd[1688]: Timezone right/UTC failed leap second check, ignoring Dec 16 13:06:36.562024 chronyd[1688]: Loaded seccomp filter (level 2) Dec 16 13:06:36.568589 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 13:06:36.593018 tar[1720]: linux-amd64/LICENSE Dec 16 13:06:36.593197 tar[1720]: linux-amd64/helm Dec 16 13:06:36.599442 update_engine[1711]: I20251216 13:06:36.599130 1711 main.cc:92] Flatcar Update Engine starting Dec 16 13:06:36.642834 systemd-logind[1709]: New seat seat0. Dec 16 13:06:36.667848 dbus-daemon[1691]: [system] SELinux support is enabled Dec 16 13:06:36.668184 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:06:36.670972 bash[1768]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:06:36.712034 update_engine[1711]: I20251216 13:06:36.711992 1711 update_check_scheduler.cc:74] Next update check in 11m37s Dec 16 13:06:36.717736 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:06:36.721616 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:06:36.727292 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:06:36.739910 sshd_keygen[1745]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:06:36.732816 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:06:36.732890 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:06:36.732912 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:06:36.735503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:06:36.735523 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:06:36.738135 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:06:36.741559 dbus-daemon[1691]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:06:36.745631 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 13:06:36.809716 coreos-metadata[1690]: Dec 16 13:06:36.808 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:06:36.813342 coreos-metadata[1690]: Dec 16 13:06:36.813 INFO Fetch successful Dec 16 13:06:36.814320 coreos-metadata[1690]: Dec 16 13:06:36.813 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:06:36.816785 coreos-metadata[1690]: Dec 16 13:06:36.816 INFO Fetch successful Dec 16 13:06:36.820144 coreos-metadata[1690]: Dec 16 13:06:36.820 INFO Fetching http://168.63.129.16/machine/a46cc643-63df-4bcc-8ce9-1df66764890b/49ea00e9%2D2159%2D4db7%2Da605%2Dc4d2b1ff32bf.%5Fci%2D4459.2.2%2Da%2Dace8908665?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:06:36.821811 coreos-metadata[1690]: Dec 16 13:06:36.821 INFO Fetch successful Dec 16 13:06:36.821861 coreos-metadata[1690]: Dec 16 13:06:36.821 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:06:36.826855 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:06:36.833273 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:06:36.839812 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:06:36.843217 coreos-metadata[1690]: Dec 16 13:06:36.841 INFO Fetch successful Dec 16 13:06:36.875934 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:06:36.876112 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:06:36.880947 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:06:36.893108 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:06:36.897449 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:06:36.903147 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:06:36.918063 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:06:36.925354 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:06:36.933884 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:06:36.936204 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:06:36.953530 locksmithd[1805]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:06:37.153799 tar[1720]: linux-amd64/README.md Dec 16 13:06:37.172073 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
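The metadata agent above talks to the two standard Azure endpoints: the WireServer at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254. Both fetches are reproducible from a shell on the VM; the vmSize URL below is copied from the log, and the Metadata header is a documented IMDS requirement:

    # IMDS refuses requests without this header (and any proxied requests).
    curl -s -H "Metadata:true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"

    # WireServer version probe, the same URL coreos-metadata fetched above.
    curl -s "http://168.63.129.16/?comp=versions"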
Dec 16 13:06:37.597064 containerd[1739]: time="2025-12-16T13:06:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:06:37.598897 containerd[1739]: time="2025-12-16T13:06:37.598859840Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607090573Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.72µs" Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607122164Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607140029Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607290486Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607303349Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607324415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607383522Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607396777Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607645293Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607653373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607680693Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:06:37.607702 containerd[1739]: time="2025-12-16T13:06:37.607689218Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.607746469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.607921926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.607954530Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.607966410Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.608002504Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.608243030Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:06:37.608606 containerd[1739]: time="2025-12-16T13:06:37.608294671Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622331702Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622401521Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622422452Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622435406Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622448666Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622464300Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622479058Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622490678Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622500729Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622511345Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622520510Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622533647Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622640224Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:06:37.622871 containerd[1739]: time="2025-12-16T13:06:37.622656351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622685249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622697095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622710369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622721387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622736052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622746771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622758953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622769024Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622783751Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622832146Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622845331Z" level=info msg="Start snapshots syncer" Dec 16 13:06:37.623190 containerd[1739]: time="2025-12-16T13:06:37.622863497Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:06:37.623418 containerd[1739]: time="2025-12-16T13:06:37.623097847Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:06:37.623418 containerd[1739]: time="2025-12-16T13:06:37.623144347Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623176667Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623273803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623291578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623304304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623315669Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623327512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623337947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623348707Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623369828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: 
time="2025-12-16T13:06:37.623385124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623395937Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623423216Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623436975Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:37.623544 containerd[1739]: time="2025-12-16T13:06:37.623445398Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623455302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623463000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623471965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623487520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623501654Z" level=info msg="runtime interface created" Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623508138Z" level=info msg="created NRI interface" Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623516660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623526681Z" level=info msg="Connect containerd service" Dec 16 13:06:37.625359 containerd[1739]: time="2025-12-16T13:06:37.623543859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:06:37.626012 containerd[1739]: time="2025-12-16T13:06:37.625785692Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:06:37.680969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
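The closing error is expected on a freshly provisioned node: the CRI plugin looked in /etc/cni/net.d (the confDir in the config dump above) and found nothing, so pod networking stays uninitialized until a CNI config appears, normally installed later by the cluster's network add-on. For illustration only, a minimal conflist using the reference bridge plugin, with a placeholder subnet that is not taken from this host:

    # Hypothetical sketch; real clusters get this file from their network add-on.
    cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF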
Dec 16 13:06:37.685210 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117335047Z" level=info msg="Start subscribing containerd event" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117389612Z" level=info msg="Start recovering state" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117486939Z" level=info msg="Start event monitor" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117497979Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117504397Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117505810Z" level=info msg="Start streaming server" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117531538Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117539345Z" level=info msg="runtime interface starting up..." Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117544550Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117545715Z" level=info msg="starting plugins..." Dec 16 13:06:38.119592 containerd[1739]: time="2025-12-16T13:06:38.117600170Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:06:38.120859 containerd[1739]: time="2025-12-16T13:06:38.120817399Z" level=info msg="containerd successfully booted in 0.524344s" Dec 16 13:06:38.121024 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:06:38.123947 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:06:38.126805 systemd[1]: Startup finished in 3.068s (kernel) + 11.617s (initrd) + 11.548s (userspace) = 26.235s. Dec 16 13:06:38.258541 kubelet[1851]: E1216 13:06:38.258485 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:38.260259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:38.260360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:38.260640 systemd[1]: kubelet.service: Consumed 932ms CPU time, 265.9M memory peak. Dec 16 13:06:38.410789 login[1831]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Dec 16 13:06:38.411582 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:06:38.424160 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:06:38.426877 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:06:38.433773 systemd-logind[1709]: New session 2 of user core. Dec 16 13:06:38.463676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:06:38.465548 systemd[1]: Starting user@500.service - User Manager for UID 500... 
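The kubelet exit is likewise expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so the unit keeps failing (and systemd keeps restarting it further down) until the node is joined to a cluster. A stripped-down sketch of such a file, with placeholder values rather than anything recovered from this host:

    # Normally generated by kubeadm; shown here only to illustrate the format.
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup=true in containerd above
    clusterDNS:
      - 10.96.0.10                   # placeholder cluster DNS address
    clusterDomain: cluster.local
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF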
Dec 16 13:06:38.476700 (systemd)[1874]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:06:38.478245 systemd-logind[1709]: New session c1 of user core. Dec 16 13:06:38.659900 waagent[1827]: 2025-12-16T13:06:38.659841Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:06:38.662209 waagent[1827]: 2025-12-16T13:06:38.662097Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 13:06:38.663925 waagent[1827]: 2025-12-16T13:06:38.663851Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:06:38.667386 waagent[1827]: 2025-12-16T13:06:38.665801Z INFO Daemon Daemon Run daemon Dec 16 13:06:38.667793 waagent[1827]: 2025-12-16T13:06:38.667758Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 13:06:38.678679 waagent[1827]: 2025-12-16T13:06:38.670723Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:06:38.678679 waagent[1827]: 2025-12-16T13:06:38.672864Z INFO Daemon Daemon Activate resource disk Dec 16 13:06:38.678679 waagent[1827]: 2025-12-16T13:06:38.674098Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:06:38.679388 waagent[1827]: 2025-12-16T13:06:38.679355Z INFO Daemon Daemon Found device: None Dec 16 13:06:38.681172 waagent[1827]: 2025-12-16T13:06:38.681138Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:06:38.683961 waagent[1827]: 2025-12-16T13:06:38.683920Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:06:38.684574 systemd[1874]: Queued start job for default target default.target. Dec 16 13:06:38.686563 waagent[1827]: 2025-12-16T13:06:38.686523Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:06:38.688294 waagent[1827]: 2025-12-16T13:06:38.687492Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:06:38.689837 systemd[1874]: Created slice app.slice - User Application Slice. Dec 16 13:06:38.689858 systemd[1874]: Reached target paths.target - Paths. Dec 16 13:06:38.689889 systemd[1874]: Reached target timers.target - Timers. Dec 16 13:06:38.692815 systemd[1874]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:06:38.695167 waagent[1827]: 2025-12-16T13:06:38.695120Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:06:38.698314 waagent[1827]: 2025-12-16T13:06:38.698278Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:06:38.699687 waagent[1827]: 2025-12-16T13:06:38.698423Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:06:38.699687 waagent[1827]: 2025-12-16T13:06:38.698632Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:06:38.709560 systemd[1874]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:06:38.709649 systemd[1874]: Reached target sockets.target - Sockets. Dec 16 13:06:38.709704 systemd[1874]: Reached target basic.target - Basic System. Dec 16 13:06:38.709760 systemd[1874]: Reached target default.target - Main User Target. Dec 16 13:06:38.709780 systemd[1874]: Startup finished in 227ms. Dec 16 13:06:38.710282 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 16 13:06:38.713770 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:06:38.751686 waagent[1827]: 2025-12-16T13:06:38.750168Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:06:38.778878 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:06:38.780268 waagent[1827]: 2025-12-16T13:06:38.780224Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:06:38.780621 waagent[1827]: 2025-12-16T13:06:38.780378Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:06:38.780621 waagent[1827]: 2025-12-16T13:06:38.780656Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 13:06:38.780621 waagent[1827]: 2025-12-16T13:06:38.780951Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:06:38.780621 waagent[1827]: 2025-12-16T13:06:38.781095Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:06:38.780621 waagent[1827]: 2025-12-16T13:06:38.781268Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:06:38.793531 waagent[1827]: 2025-12-16T13:06:38.793494Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:06:38.795055 waagent[1827]: 2025-12-16T13:06:38.793802Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:06:38.795055 waagent[1827]: 2025-12-16T13:06:38.794019Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:06:38.902248 waagent[1827]: 2025-12-16T13:06:38.902195Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:06:38.903255 waagent[1827]: 2025-12-16T13:06:38.903175Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:06:38.931284 waagent[1827]: 2025-12-16T13:06:38.931216Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:06:38.945954 waagent[1827]: 2025-12-16T13:06:38.945925Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 13:06:38.947412 waagent[1827]: 2025-12-16T13:06:38.947375Z INFO Daemon Dec 16 13:06:38.948176 waagent[1827]: 2025-12-16T13:06:38.947927Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a4541b05-e5e3-41a8-98a9-da59f1f6100b eTag: 8471011647515379911 source: Fabric] Dec 16 13:06:38.950571 waagent[1827]: 2025-12-16T13:06:38.950536Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 13:06:38.952082 waagent[1827]: 2025-12-16T13:06:38.952052Z INFO Daemon Dec 16 13:06:38.952962 waagent[1827]: 2025-12-16T13:06:38.952893Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:06:38.960684 waagent[1827]: 2025-12-16T13:06:38.960650Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:06:39.027395 waagent[1827]: 2025-12-16T13:06:39.027353Z INFO Daemon Downloaded certificate {'thumbprint': '19B43CF5D33DF3B29315D443002B3103E941229D', 'hasPrivateKey': True} Dec 16 13:06:39.029522 waagent[1827]: 2025-12-16T13:06:39.029492Z INFO Daemon Fetch goal state completed Dec 16 13:06:39.040288 waagent[1827]: 2025-12-16T13:06:39.040228Z INFO Daemon Daemon Starting provisioning Dec 16 13:06:39.041330 waagent[1827]: 2025-12-16T13:06:39.041297Z INFO Daemon Daemon Handle ovf-env.xml. 
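waagent's protocol detection above reduces to a route check against the WireServer followed by a version handshake; the "Wire protocol version:2012-11-30" it settles on is sent as an HTTP header on later requests. Both steps can be replayed by hand when debugging provisioning (the header name follows the documented WireServer convention):

    # Confirm the host route the daemon tested above.
    ip route get 168.63.129.16

    # Fetch the goal state with the negotiated protocol version.
    curl -s -H "x-ms-version: 2012-11-30" "http://168.63.129.16/machine/?comp=goalstate"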
Dec 16 13:06:39.041954 waagent[1827]: 2025-12-16T13:06:39.041929Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-ace8908665] Dec 16 13:06:39.060183 waagent[1827]: 2025-12-16T13:06:39.060147Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-ace8908665] Dec 16 13:06:39.061317 waagent[1827]: 2025-12-16T13:06:39.060393Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:06:39.061317 waagent[1827]: 2025-12-16T13:06:39.060577Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:06:39.068794 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:06:39.068800 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:06:39.068822 systemd-networkd[1352]: eth0: DHCP lease lost Dec 16 13:06:39.069591 waagent[1827]: 2025-12-16T13:06:39.069550Z INFO Daemon Daemon Create user account if not exists Dec 16 13:06:39.070904 waagent[1827]: 2025-12-16T13:06:39.069886Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:06:39.070904 waagent[1827]: 2025-12-16T13:06:39.070328Z INFO Daemon Daemon Configure sudoer Dec 16 13:06:39.074373 waagent[1827]: 2025-12-16T13:06:39.074326Z INFO Daemon Daemon Configure sshd Dec 16 13:06:39.079386 waagent[1827]: 2025-12-16T13:06:39.079347Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:06:39.081850 waagent[1827]: 2025-12-16T13:06:39.081140Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:06:39.100736 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.0.12/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:06:39.412494 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:06:39.416326 systemd-logind[1709]: New session 1 of user core. Dec 16 13:06:39.421790 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:06:40.156686 waagent[1827]: 2025-12-16T13:06:40.156630Z INFO Daemon Daemon Provisioning complete Dec 16 13:06:40.168900 waagent[1827]: 2025-12-16T13:06:40.168864Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:06:40.170598 waagent[1827]: 2025-12-16T13:06:40.169099Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
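The "Configure sshd" step drops a configuration snippet whose contents the log does not print; going by the description (password authentication off, client keep-alive probing on), its effect is roughly equivalent to the sketch below. The drop-in path and interval are assumptions, not values from this host:

    # Illustrative equivalent of the snippet waagent describes above.
    cat > /etc/ssh/sshd_config.d/40-waagent.conf <<'EOF'
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    ClientAliveInterval 180
    EOF
    systemctl reload sshd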
Dec 16 13:06:40.170598 waagent[1827]: 2025-12-16T13:06:40.169353Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:06:40.270341 waagent[1924]: 2025-12-16T13:06:40.270275Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:06:40.270587 waagent[1924]: 2025-12-16T13:06:40.270362Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 13:06:40.270587 waagent[1924]: 2025-12-16T13:06:40.270398Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:06:40.270587 waagent[1924]: 2025-12-16T13:06:40.270434Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:06:40.311005 waagent[1924]: 2025-12-16T13:06:40.310956Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:06:40.311143 waagent[1924]: 2025-12-16T13:06:40.311117Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:06:40.311199 waagent[1924]: 2025-12-16T13:06:40.311170Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:06:40.327680 waagent[1924]: 2025-12-16T13:06:40.327622Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:06:40.334586 waagent[1924]: 2025-12-16T13:06:40.334555Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 13:06:40.334939 waagent[1924]: 2025-12-16T13:06:40.334909Z INFO ExtHandler Dec 16 13:06:40.334982 waagent[1924]: 2025-12-16T13:06:40.334966Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4c86c1be-8a52-4597-8f91-e81d4148bce9 eTag: 8471011647515379911 source: Fabric] Dec 16 13:06:40.335194 waagent[1924]: 2025-12-16T13:06:40.335169Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:06:40.335528 waagent[1924]: 2025-12-16T13:06:40.335503Z INFO ExtHandler Dec 16 13:06:40.335563 waagent[1924]: 2025-12-16T13:06:40.335543Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:06:40.343332 waagent[1924]: 2025-12-16T13:06:40.343301Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:06:40.408683 waagent[1924]: 2025-12-16T13:06:40.408581Z INFO ExtHandler Downloaded certificate {'thumbprint': '19B43CF5D33DF3B29315D443002B3103E941229D', 'hasPrivateKey': True} Dec 16 13:06:40.409000 waagent[1924]: 2025-12-16T13:06:40.408968Z INFO ExtHandler Fetch goal state completed Dec 16 13:06:40.422683 waagent[1924]: 2025-12-16T13:06:40.422628Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 13:06:40.426654 waagent[1924]: 2025-12-16T13:06:40.426615Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1924 Dec 16 13:06:40.426797 waagent[1924]: 2025-12-16T13:06:40.426772Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:06:40.427027 waagent[1924]: 2025-12-16T13:06:40.427005Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:06:40.428168 waagent[1924]: 2025-12-16T13:06:40.428135Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:06:40.428470 waagent[1924]: 2025-12-16T13:06:40.428443Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:06:40.428581 waagent[1924]: 2025-12-16T13:06:40.428560Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:06:40.429011 waagent[1924]: 2025-12-16T13:06:40.428981Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:06:40.463223 waagent[1924]: 2025-12-16T13:06:40.463197Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:06:40.463345 waagent[1924]: 2025-12-16T13:06:40.463323Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:06:40.468681 waagent[1924]: 2025-12-16T13:06:40.468570Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:06:40.473652 systemd[1]: Reload requested from client PID 1939 ('systemctl') (unit waagent.service)... Dec 16 13:06:40.473681 systemd[1]: Reloading... Dec 16 13:06:40.545718 zram_generator::config[1981]: No configuration found. Dec 16 13:06:40.707924 systemd[1]: Reloading finished in 233 ms. Dec 16 13:06:40.719689 waagent[1924]: 2025-12-16T13:06:40.716885Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:06:40.719689 waagent[1924]: 2025-12-16T13:06:40.717026Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:06:40.828304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#91 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:06:41.247765 waagent[1924]: 2025-12-16T13:06:41.247686Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
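The two AutoUpdate lines show this image pins its agent: with AutoUpdate.Enabled and AutoUpdate.UpdateToLatestVersion both False, WALinuxAgent 2.12.0.4 will not replace itself with a newer goal-state agent. These are plain /etc/waagent.conf keys and can be checked directly:

    # Inspect the auto-update settings the agent reported above.
    grep -i '^AutoUpdate' /etc/waagent.conf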
Dec 16 13:06:41.248039 waagent[1924]: 2025-12-16T13:06:41.248013Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:06:41.248723 waagent[1924]: 2025-12-16T13:06:41.248592Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:06:41.248893 waagent[1924]: 2025-12-16T13:06:41.248866Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:06:41.248956 waagent[1924]: 2025-12-16T13:06:41.248935Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:06:41.249128 waagent[1924]: 2025-12-16T13:06:41.249107Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:06:41.249399 waagent[1924]: 2025-12-16T13:06:41.249374Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:06:41.249607 waagent[1924]: 2025-12-16T13:06:41.249566Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:06:41.249645 waagent[1924]: 2025-12-16T13:06:41.249614Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:06:41.250013 waagent[1924]: 2025-12-16T13:06:41.249954Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:06:41.250087 waagent[1924]: 2025-12-16T13:06:41.250050Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:06:41.250087 waagent[1924]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:06:41.250087 waagent[1924]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:06:41.250087 waagent[1924]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:06:41.250087 waagent[1924]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:06:41.250087 waagent[1924]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:06:41.250087 waagent[1924]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:06:41.250411 waagent[1924]: 2025-12-16T13:06:41.250382Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
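The /proc/net/route dump above encodes addresses as little-endian hex: reading byte pairs right to left, gateway 0100C80A is 10.200.0.1, destination 10813FA8 is 168.63.129.16 (the WireServer host route), and FEA9FEA9 is 169.254.169.254 (IMDS). The same table in dotted form:

    # Kernel routing table, human-readable.
    ip route show

    # Decoding one hex address by hand (prints 10.200.0.1).
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x00 0x01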
Dec 16 13:06:41.250555 waagent[1924]: 2025-12-16T13:06:41.250518Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:06:41.251017 waagent[1924]: 2025-12-16T13:06:41.250982Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:06:41.251063 waagent[1924]: 2025-12-16T13:06:41.251050Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:06:41.251567 waagent[1924]: 2025-12-16T13:06:41.251542Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:06:41.251903 waagent[1924]: 2025-12-16T13:06:41.251873Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:06:41.252763 waagent[1924]: 2025-12-16T13:06:41.252739Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:06:41.260434 waagent[1924]: 2025-12-16T13:06:41.260399Z INFO ExtHandler ExtHandler Dec 16 13:06:41.260495 waagent[1924]: 2025-12-16T13:06:41.260458Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 40f09baf-4bac-49b1-8fab-0407756cacab correlation 5bce89c4-dcc5-4caa-b441-76934e1b8f47 created: 2025-12-16T13:05:40.883186Z] Dec 16 13:06:41.260766 waagent[1924]: 2025-12-16T13:06:41.260742Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:06:41.261115 waagent[1924]: 2025-12-16T13:06:41.261094Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:06:41.293735 waagent[1924]: 2025-12-16T13:06:41.293694Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:06:41.293735 waagent[1924]: Try `iptables -h' or 'iptables --help' for more information.) 
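The "Failed to get firewall packets" warning is benign: the agent combines listing and zeroing in a single invocation (-L OUTPUT --zero OUTPUT -nxv), and iptables 1.8.x in nf_tables mode treats that as a zero command, which does not accept --numeric. Splitting the two operations reads the same counters without the error:

    # List the security-table OUTPUT chain with numeric extended counters...
    iptables -w -t security -L OUTPUT -nxv
    # ...then zero the counters as a separate step.
    iptables -w -t security -Z OUTPUT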
Dec 16 13:06:41.294054 waagent[1924]: 2025-12-16T13:06:41.294013Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CD9CCCAE-D62D-4B80-9A90-145304878080;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:06:41.316658 waagent[1924]: 2025-12-16T13:06:41.316615Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:06:41.316658 waagent[1924]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:06:41.316658 waagent[1924]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:06:41.316658 waagent[1924]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:9a:67:7e brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 16 13:06:41.316658 waagent[1924]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:9a:67:7e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:06:41.316658 waagent[1924]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:06:41.316658 waagent[1924]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:06:41.316658 waagent[1924]: 2: eth0 inet 10.200.0.12/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:06:41.316658 waagent[1924]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:06:41.316658 waagent[1924]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:06:41.316658 waagent[1924]: 2: eth0 inet6 fe80::6245:bdff:fe9a:677e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:06:41.345524 waagent[1924]: 2025-12-16T13:06:41.345477Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:06:41.345524 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:06:41.345524 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.345524 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:06:41.345524 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.345524 waagent[1924]: Chain OUTPUT (policy ACCEPT 7 packets, 940 bytes) Dec 16 13:06:41.345524 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.345524 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:06:41.345524 waagent[1924]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:06:41.345524 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:06:41.348116 waagent[1924]: 2025-12-16T13:06:41.348075Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:06:41.348116 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:06:41.348116 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.348116 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:06:41.348116 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.348116 waagent[1924]: Chain OUTPUT (policy ACCEPT 7 packets, 940 bytes) Dec 16 13:06:41.348116 waagent[1924]: pkts bytes target prot opt in out source destination Dec 16 13:06:41.348116 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:06:41.348116 waagent[1924]: 2 112 
ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:06:41.348116 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:06:48.511148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:06:48.512476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:49.043648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:06:49.047963 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:49.080700 kubelet[2076]: E1216 13:06:49.080648 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:49.083483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:49.083603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:49.083907 systemd[1]: kubelet.service: Consumed 126ms CPU time, 108.5M memory peak. Dec 16 13:06:57.258006 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:06:57.259042 systemd[1]: Started sshd@0-10.200.0.12:22-10.200.16.10:40324.service - OpenSSH per-connection server daemon (10.200.16.10:40324). Dec 16 13:06:57.922511 sshd[2084]: Accepted publickey for core from 10.200.16.10 port 40324 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:57.923517 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:57.927581 systemd-logind[1709]: New session 3 of user core. Dec 16 13:06:57.933816 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:06:58.409391 systemd[1]: Started sshd@1-10.200.0.12:22-10.200.16.10:40326.service - OpenSSH per-connection server daemon (10.200.16.10:40326). Dec 16 13:06:58.958715 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 40326 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:06:58.959778 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:58.963158 systemd-logind[1709]: New session 4 of user core. Dec 16 13:06:58.972797 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:06:59.149075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:06:59.150348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:59.346627 sshd[2093]: Connection closed by 10.200.16.10 port 40326 Dec 16 13:06:59.347334 sshd-session[2090]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:59.350608 systemd[1]: sshd@1-10.200.0.12:22-10.200.16.10:40326.service: Deactivated successfully. Dec 16 13:06:59.351964 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:06:59.352606 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:06:59.353599 systemd-logind[1709]: Removed session 4. Dec 16 13:06:59.442046 systemd[1]: Started sshd@2-10.200.0.12:22-10.200.16.10:40332.service - OpenSSH per-connection server daemon (10.200.16.10:40332). Dec 16 13:06:59.631373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
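The rule set dumped just above (split across the two preceding log chunks) is the agent's standard WireServer protection: permit DNS to 168.63.129.16, permit traffic from UID 0 to it, and drop any other new connection toward that address. Recreated as explicit iptables commands, assuming the security table the agent queries elsewhere in this log:

    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP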
Dec 16 13:06:59.634008 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:06:59.669181 kubelet[2110]: E1216 13:06:59.669152 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:06:59.670610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:06:59.670748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:06:59.671011 systemd[1]: kubelet.service: Consumed 123ms CPU time, 108.6M memory peak. Dec 16 13:07:00.001192 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 40332 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:00.002258 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:00.006569 systemd-logind[1709]: New session 5 of user core. Dec 16 13:07:00.013805 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:07:00.346267 chronyd[1688]: Selected source PHC0 Dec 16 13:07:00.389121 sshd[2118]: Connection closed by 10.200.16.10 port 40332 Dec 16 13:07:00.389572 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:00.392798 systemd[1]: sshd@2-10.200.0.12:22-10.200.16.10:40332.service: Deactivated successfully. Dec 16 13:07:00.394220 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:07:00.395007 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:07:00.396086 systemd-logind[1709]: Removed session 5. Dec 16 13:07:00.490011 systemd[1]: Started sshd@3-10.200.0.12:22-10.200.16.10:37768.service - OpenSSH per-connection server daemon (10.200.16.10:37768). Dec 16 13:07:01.045810 sshd[2124]: Accepted publickey for core from 10.200.16.10 port 37768 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:01.046872 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:01.051001 systemd-logind[1709]: New session 6 of user core. Dec 16 13:07:01.057823 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:07:01.436178 sshd[2127]: Connection closed by 10.200.16.10 port 37768 Dec 16 13:07:01.436657 sshd-session[2124]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:01.439659 systemd[1]: sshd@3-10.200.0.12:22-10.200.16.10:37768.service: Deactivated successfully. Dec 16 13:07:01.441110 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:07:01.441791 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:07:01.443006 systemd-logind[1709]: Removed session 6. Dec 16 13:07:01.532737 systemd[1]: Started sshd@4-10.200.0.12:22-10.200.16.10:37772.service - OpenSSH per-connection server daemon (10.200.16.10:37772). Dec 16 13:07:02.082249 sshd[2133]: Accepted publickey for core from 10.200.16.10 port 37772 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:02.083302 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:02.087199 systemd-logind[1709]: New session 7 of user core. Dec 16 13:07:02.092805 systemd[1]: Started session-7.scope - Session 7 of User core. 
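"Selected source PHC0" above means chronyd is disciplining the clock from a PTP hardware clock device the hypervisor exposes (/dev/ptp0) rather than from a network NTP server, which is typical for Hyper-V guests. Its state can be checked with the usual chronyc queries:

    # Selected source (PHC0) with offsets, plus overall tracking state.
    chronyc sources -v
    chronyc tracking

    # The underlying PTP device node.
    ls -l /dev/ptp*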
Dec 16 13:07:02.521226 sudo[2137]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:07:02.521448 sudo[2137]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:02.901346 sudo[2137]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:02.988107 sshd[2136]: Connection closed by 10.200.16.10 port 37772 Dec 16 13:07:02.988827 sshd-session[2133]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:02.991730 systemd[1]: sshd@4-10.200.0.12:22-10.200.16.10:37772.service: Deactivated successfully. Dec 16 13:07:02.993107 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:07:02.994686 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:07:02.995528 systemd-logind[1709]: Removed session 7. Dec 16 13:07:03.094164 systemd[1]: Started sshd@5-10.200.0.12:22-10.200.16.10:37788.service - OpenSSH per-connection server daemon (10.200.16.10:37788). Dec 16 13:07:03.643435 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 37788 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:03.644521 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:03.648948 systemd-logind[1709]: New session 8 of user core. Dec 16 13:07:03.655824 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:07:03.947706 sudo[2148]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:07:03.947923 sudo[2148]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:03.953907 sudo[2148]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:03.957696 sudo[2147]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:07:03.957894 sudo[2147]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:03.964821 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:07:03.994927 augenrules[2170]: No rules Dec 16 13:07:03.995380 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:07:03.995526 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:07:03.996626 sudo[2147]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:04.083321 sshd[2146]: Connection closed by 10.200.16.10 port 37788 Dec 16 13:07:04.083703 sshd-session[2143]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:04.086416 systemd[1]: sshd@5-10.200.0.12:22-10.200.16.10:37788.service: Deactivated successfully. Dec 16 13:07:04.087636 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:07:04.088246 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:07:04.089122 systemd-logind[1709]: Removed session 8. Dec 16 13:07:04.184926 systemd[1]: Started sshd@6-10.200.0.12:22-10.200.16.10:37804.service - OpenSSH per-connection server daemon (10.200.16.10:37804). Dec 16 13:07:04.743824 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 37804 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:04.744874 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:04.748958 systemd-logind[1709]: New session 9 of user core. Dec 16 13:07:04.758793 systemd[1]: Started session-9.scope - Session 9 of User core. 
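The sudo entries above remove the stock audit rule files and restart audit-rules, after which augenrules finds an empty /etc/audit/rules.d and logs "No rules". Replayed as plain commands, taken directly from the log:

  sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
  sudo systemctl restart audit-rules   # runs augenrules over /etc/audit/rules.d/*.rules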
Dec 16 13:07:05.047902 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:07:05.048121 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:06.971701 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:07:06.983946 (dockerd)[2200]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:07:09.178704 dockerd[2200]: time="2025-12-16T13:07:09.178575601Z" level=info msg="Starting up" Dec 16 13:07:09.180555 dockerd[2200]: time="2025-12-16T13:07:09.180523841Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:07:09.189572 dockerd[2200]: time="2025-12-16T13:07:09.189537364Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:07:09.436855 dockerd[2200]: time="2025-12-16T13:07:09.436651252Z" level=info msg="Loading containers: start." Dec 16 13:07:09.492682 kernel: Initializing XFRM netlink socket Dec 16 13:07:09.898970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:07:09.900730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:10.570243 systemd-networkd[1352]: docker0: Link UP Dec 16 13:07:10.619720 dockerd[2200]: time="2025-12-16T13:07:10.619686865Z" level=info msg="Loading containers: done." Dec 16 13:07:10.677128 dockerd[2200]: time="2025-12-16T13:07:10.677088987Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:07:10.677338 dockerd[2200]: time="2025-12-16T13:07:10.677274924Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:07:10.677536 dockerd[2200]: time="2025-12-16T13:07:10.677478308Z" level=info msg="Initializing buildkit" Dec 16 13:07:10.677814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:10.683860 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:10.722347 kubelet[2383]: E1216 13:07:10.722303 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:10.723515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:10.723647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:10.724042 systemd[1]: kubelet.service: Consumed 124ms CPU time, 110M memory peak. Dec 16 13:07:10.728791 dockerd[2200]: time="2025-12-16T13:07:10.728752442Z" level=info msg="Completed buildkit initialization" Dec 16 13:07:10.735071 dockerd[2200]: time="2025-12-16T13:07:10.735030326Z" level=info msg="Daemon has completed initialization" Dec 16 13:07:10.735237 dockerd[2200]: time="2025-12-16T13:07:10.735171559Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:07:10.735238 systemd[1]: Started docker.service - Docker Application Container Engine. 
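The dockerd warning about DOCKER_CGROUPS, DOCKER_OPTS and friends is benign: the unit references those variables, nothing sets them, so they expand to empty strings. If one were actually needed, a drop-in along these lines would populate it (the file name and option value here are hypothetical examples):

  sudo mkdir -p /etc/systemd/system/docker.service.d
  sudo tee /etc/systemd/system/docker.service.d/10-env.conf <<'EOF'
  [Service]
  Environment="DOCKER_OPTS=--log-level=warn"
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart docker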
Dec 16 13:07:11.821515 containerd[1739]: time="2025-12-16T13:07:11.821478152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:07:12.453750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884634178.mount: Deactivated successfully. Dec 16 13:07:13.737845 containerd[1739]: time="2025-12-16T13:07:13.737797121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.740450 containerd[1739]: time="2025-12-16T13:07:13.740426214Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29071621" Dec 16 13:07:13.743368 containerd[1739]: time="2025-12-16T13:07:13.743328461Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.755705 containerd[1739]: time="2025-12-16T13:07:13.754884516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.755705 containerd[1739]: time="2025-12-16T13:07:13.755540948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.934028516s" Dec 16 13:07:13.755705 containerd[1739]: time="2025-12-16T13:07:13.755574340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:07:13.756321 containerd[1739]: time="2025-12-16T13:07:13.756261283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:07:14.736160 containerd[1739]: time="2025-12-16T13:07:14.736113860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:14.738554 containerd[1739]: time="2025-12-16T13:07:14.738523794Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24991942" Dec 16 13:07:14.743649 containerd[1739]: time="2025-12-16T13:07:14.743237192Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:14.747135 containerd[1739]: time="2025-12-16T13:07:14.747110454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:14.747682 containerd[1739]: time="2025-12-16T13:07:14.747650419Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 991.24199ms" 
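The PullImage/ImageCreate lines are containerd acting on CRI requests. The same pull can be reproduced by hand with crictl, assuming the default containerd socket on this host:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.32.10
  crictl images | grep kube-apiserver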
Dec 16 13:07:14.747758 containerd[1739]: time="2025-12-16T13:07:14.747747035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:07:14.748310 containerd[1739]: time="2025-12-16T13:07:14.748288778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:07:15.693273 containerd[1739]: time="2025-12-16T13:07:15.693230227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.695859 containerd[1739]: time="2025-12-16T13:07:15.695828351Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404180" Dec 16 13:07:15.698737 containerd[1739]: time="2025-12-16T13:07:15.698700525Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.702781 containerd[1739]: time="2025-12-16T13:07:15.702614554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:15.703199 containerd[1739]: time="2025-12-16T13:07:15.703177695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 954.865111ms" Dec 16 13:07:15.703235 containerd[1739]: time="2025-12-16T13:07:15.703207149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:07:15.703754 containerd[1739]: time="2025-12-16T13:07:15.703736615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:07:16.458180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851358864.mount: Deactivated successfully. 
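The var-lib-containerd-tmpmounts-containerd\x2dmount*.mount units seen around each pull are containerd's short-lived unpack mounts; systemd derives the unit name by escaping the mount path ('/' becomes '-', a literal '-' becomes \x2d). The mapping can be checked with:

  systemd-escape -p --suffix=mount /var/lib/containerd/tmpmounts/containerd-mount851358864
  # -> var-lib-containerd-tmpmounts-containerd\x2dmount851358864.mount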
Dec 16 13:07:16.816683 containerd[1739]: time="2025-12-16T13:07:16.816559637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.819131 containerd[1739]: time="2025-12-16T13:07:16.819096043Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161317" Dec 16 13:07:16.822178 containerd[1739]: time="2025-12-16T13:07:16.822134939Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.826401 containerd[1739]: time="2025-12-16T13:07:16.826365525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.826832 containerd[1739]: time="2025-12-16T13:07:16.826687760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.122853427s" Dec 16 13:07:16.826832 containerd[1739]: time="2025-12-16T13:07:16.826715587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:07:16.827185 containerd[1739]: time="2025-12-16T13:07:16.827170091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:07:17.289449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185511098.mount: Deactivated successfully. 
Dec 16 13:07:18.253496 containerd[1739]: time="2025-12-16T13:07:18.253439955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:18.257371 containerd[1739]: time="2025-12-16T13:07:18.257345043Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18564717" Dec 16 13:07:18.260291 containerd[1739]: time="2025-12-16T13:07:18.260266708Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:18.265265 containerd[1739]: time="2025-12-16T13:07:18.265221475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:18.266032 containerd[1739]: time="2025-12-16T13:07:18.265876144Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.438680754s" Dec 16 13:07:18.266032 containerd[1739]: time="2025-12-16T13:07:18.265907428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:07:18.266377 containerd[1739]: time="2025-12-16T13:07:18.266358628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:07:18.722333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571138368.mount: Deactivated successfully. 
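Each pull above ends with a repo digest, so the fetched control-plane images can be cross-checked against what containerd now holds in its k8s.io namespace:

  ctr -n k8s.io images ls | grep registry.k8s.io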
Dec 16 13:07:18.752518 containerd[1739]: time="2025-12-16T13:07:18.752478724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:18.755643 containerd[1739]: time="2025-12-16T13:07:18.755551193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Dec 16 13:07:18.758851 containerd[1739]: time="2025-12-16T13:07:18.758826674Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:18.762711 containerd[1739]: time="2025-12-16T13:07:18.762658250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:18.763207 containerd[1739]: time="2025-12-16T13:07:18.763181032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.800076ms" Dec 16 13:07:18.763248 containerd[1739]: time="2025-12-16T13:07:18.763208567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:07:18.763799 containerd[1739]: time="2025-12-16T13:07:18.763775701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:07:19.019875 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 16 13:07:19.299435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29804109.mount: Deactivated successfully. Dec 16 13:07:20.899319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:07:20.902760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:21.426831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
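The restart counter has reached 4, with attempts spaced roughly ten seconds apart (13:06:48, 13:06:59, 13:07:09, 13:07:20), which is consistent with a Restart=always, RestartSec=10 policy on the unit. The effective values can be read back rather than guessed:

  systemctl show kubelet.service -p Restart -p RestartUSec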
Dec 16 13:07:21.435891 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:21.458103 containerd[1739]: time="2025-12-16T13:07:21.458061607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.462833 containerd[1739]: time="2025-12-16T13:07:21.462801506Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57681646" Dec 16 13:07:21.466928 containerd[1739]: time="2025-12-16T13:07:21.466826373Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.470427 kubelet[2618]: E1216 13:07:21.470375 2618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:21.471871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:21.471987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:21.472258 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110M memory peak. Dec 16 13:07:21.473586 containerd[1739]: time="2025-12-16T13:07:21.473264774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.474270 containerd[1739]: time="2025-12-16T13:07:21.474246448Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.710448495s" Dec 16 13:07:21.474313 containerd[1739]: time="2025-12-16T13:07:21.474273670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:07:21.904137 update_engine[1711]: I20251216 13:07:21.904093 1711 update_attempter.cc:509] Updating boot flags... Dec 16 13:07:23.424222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:23.424371 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110M memory peak. Dec 16 13:07:23.426247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:23.451393 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)... Dec 16 13:07:23.451403 systemd[1]: Reloading... Dec 16 13:07:23.526681 zram_generator::config[2731]: No configuration found. Dec 16 13:07:23.719192 systemd[1]: Reloading finished in 267 ms. Dec 16 13:07:23.757902 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:07:23.757975 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:07:23.758194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:23.758232 systemd[1]: kubelet.service: Consumed 62ms CPU time, 64.7M memory peak. 
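The "Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)" followed by a kubelet stop and start is the signature of the install script applying its unit changes, i.e. something along the lines of:

  sudo systemctl daemon-reload           # produces the "Reloading..." above
  sudo systemctl restart kubelet.service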
Dec 16 13:07:23.759928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:24.279913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:24.285179 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:07:24.320988 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:24.320988 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:07:24.320988 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:24.321253 kubelet[2798]: I1216 13:07:24.321042 2798 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:07:24.505900 kubelet[2798]: I1216 13:07:24.505869 2798 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:07:24.505900 kubelet[2798]: I1216 13:07:24.505892 2798 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:07:24.506129 kubelet[2798]: I1216 13:07:24.506117 2798 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:07:24.536747 kubelet[2798]: E1216 13:07:24.536328 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:24.537563 kubelet[2798]: I1216 13:07:24.537542 2798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:07:24.544393 kubelet[2798]: I1216 13:07:24.544372 2798 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:07:24.546935 kubelet[2798]: I1216 13:07:24.546908 2798 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:07:24.548555 kubelet[2798]: I1216 13:07:24.548520 2798 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:07:24.548763 kubelet[2798]: I1216 13:07:24.548549 2798 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-ace8908665","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:07:24.548878 kubelet[2798]: I1216 13:07:24.548769 2798 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:07:24.548878 kubelet[2798]: I1216 13:07:24.548780 2798 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:07:24.548923 kubelet[2798]: I1216 13:07:24.548881 2798 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:24.552035 kubelet[2798]: I1216 13:07:24.552012 2798 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:07:24.552099 kubelet[2798]: I1216 13:07:24.552042 2798 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:07:24.552099 kubelet[2798]: I1216 13:07:24.552065 2798 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:07:24.552099 kubelet[2798]: I1216 13:07:24.552075 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:07:24.557274 kubelet[2798]: W1216 13:07:24.557083 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:24.557274 kubelet[2798]: E1216 13:07:24.557141 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:24.557274 kubelet[2798]: W1216 
13:07:24.557207 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:24.557274 kubelet[2798]: E1216 13:07:24.557233 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:24.557415 kubelet[2798]: I1216 13:07:24.557349 2798 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:07:24.558298 kubelet[2798]: I1216 13:07:24.557761 2798 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:07:24.558298 kubelet[2798]: W1216 13:07:24.557808 2798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:07:24.559618 kubelet[2798]: I1216 13:07:24.559594 2798 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:07:24.560528 kubelet[2798]: I1216 13:07:24.559627 2798 server.go:1287] "Started kubelet" Dec 16 13:07:24.568009 kubelet[2798]: E1216 13:07:24.566697 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-ace8908665.1881b400d43b6eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-ace8908665,UID:ci-4459.2.2-a-ace8908665,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-ace8908665,},FirstTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,LastTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-ace8908665,}" Dec 16 13:07:24.569881 kubelet[2798]: I1216 13:07:24.569490 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:07:24.570129 kubelet[2798]: I1216 13:07:24.570117 2798 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:07:24.570245 kubelet[2798]: I1216 13:07:24.570230 2798 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:07:24.570994 kubelet[2798]: I1216 13:07:24.570973 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:07:24.572040 kubelet[2798]: I1216 13:07:24.572009 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:07:24.572760 kubelet[2798]: I1216 13:07:24.572740 2798 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:07:24.572863 kubelet[2798]: E1216 13:07:24.572845 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found" Dec 16 
13:07:24.573200 kubelet[2798]: I1216 13:07:24.573182 2798 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:07:24.573238 kubelet[2798]: I1216 13:07:24.573217 2798 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:07:24.574348 kubelet[2798]: I1216 13:07:24.574318 2798 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:07:24.575055 kubelet[2798]: W1216 13:07:24.575012 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:24.575108 kubelet[2798]: E1216 13:07:24.575057 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:24.575151 kubelet[2798]: E1216 13:07:24.575111 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="200ms" Dec 16 13:07:24.578372 kubelet[2798]: I1216 13:07:24.578187 2798 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:07:24.578372 kubelet[2798]: I1216 13:07:24.578250 2798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:07:24.580682 kubelet[2798]: I1216 13:07:24.580147 2798 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:07:24.588148 kubelet[2798]: I1216 13:07:24.588112 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:07:24.589028 kubelet[2798]: I1216 13:07:24.589004 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:07:24.589028 kubelet[2798]: I1216 13:07:24.589025 2798 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:07:24.589114 kubelet[2798]: I1216 13:07:24.589041 2798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
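From here on, every reflector, lease, and event call fails with "dial tcp 10.200.0.12:6443: connect: connection refused": kubelet itself is up, but nothing listens on the API server port yet, because the static pods kubelet is about to create are themselves the control plane. A quick check from the node while this is happening:

  ss -tlnp | grep 6443 || echo 'kube-apiserver not listening yet'
  curl -sk https://10.200.0.12:6443/healthz || true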
Dec 16 13:07:24.589114 kubelet[2798]: I1216 13:07:24.589047 2798 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:07:24.589114 kubelet[2798]: E1216 13:07:24.589087 2798 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:07:24.594584 kubelet[2798]: W1216 13:07:24.594541 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:24.594653 kubelet[2798]: E1216 13:07:24.594594 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:24.594719 kubelet[2798]: E1216 13:07:24.594706 2798 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:07:24.603151 kubelet[2798]: I1216 13:07:24.602992 2798 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:07:24.603151 kubelet[2798]: I1216 13:07:24.603003 2798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:07:24.603151 kubelet[2798]: I1216 13:07:24.603024 2798 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:24.608544 kubelet[2798]: I1216 13:07:24.608533 2798 policy_none.go:49] "None policy: Start" Dec 16 13:07:24.608602 kubelet[2798]: I1216 13:07:24.608598 2798 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:07:24.608631 kubelet[2798]: I1216 13:07:24.608627 2798 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:07:24.616936 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:07:24.626340 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:07:24.629006 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:07:24.647165 kubelet[2798]: I1216 13:07:24.647147 2798 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:07:24.647519 kubelet[2798]: I1216 13:07:24.647280 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:07:24.647519 kubelet[2798]: I1216 13:07:24.647389 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:07:24.647591 kubelet[2798]: I1216 13:07:24.647579 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:07:24.648918 kubelet[2798]: E1216 13:07:24.648898 2798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:07:24.648976 kubelet[2798]: E1216 13:07:24.648957 2798 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-ace8908665\" not found" Dec 16 13:07:24.697266 systemd[1]: Created slice kubepods-burstable-podf9ad2dcfa0c4e0e6878c4094ed9e89bb.slice - libcontainer container kubepods-burstable-podf9ad2dcfa0c4e0e6878c4094ed9e89bb.slice. Dec 16 13:07:24.712699 kubelet[2798]: E1216 13:07:24.712653 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.715350 systemd[1]: Created slice kubepods-burstable-pod7a3592e225c411b7ffb26cf502895fbd.slice - libcontainer container kubepods-burstable-pod7a3592e225c411b7ffb26cf502895fbd.slice. Dec 16 13:07:24.722505 kubelet[2798]: E1216 13:07:24.722486 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.724902 systemd[1]: Created slice kubepods-burstable-pod2bdf2638f09f9b3d204e7504500bdd1f.slice - libcontainer container kubepods-burstable-pod2bdf2638f09f9b3d204e7504500bdd1f.slice. Dec 16 13:07:24.726099 kubelet[2798]: E1216 13:07:24.726082 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.749057 kubelet[2798]: I1216 13:07:24.749042 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.749319 kubelet[2798]: E1216 13:07:24.749285 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.775746 kubelet[2798]: E1216 13:07:24.775717 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="400ms" Dec 16 13:07:24.874112 kubelet[2798]: I1216 13:07:24.873978 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bdf2638f09f9b3d204e7504500bdd1f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-ace8908665\" (UID: \"2bdf2638f09f9b3d204e7504500bdd1f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874112 kubelet[2798]: I1216 13:07:24.874021 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874112 kubelet[2798]: I1216 13:07:24.874038 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874112 kubelet[2798]: I1216 13:07:24.874055 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874112 kubelet[2798]: I1216 13:07:24.874071 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874390 kubelet[2798]: I1216 13:07:24.874099 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874390 kubelet[2798]: I1216 13:07:24.874115 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874390 kubelet[2798]: I1216 13:07:24.874130 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.874390 kubelet[2798]: I1216 13:07:24.874144 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.951300 kubelet[2798]: I1216 13:07:24.951282 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:24.951595 kubelet[2798]: E1216 13:07:24.951552 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:25.013545 containerd[1739]: time="2025-12-16T13:07:25.013497034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-ace8908665,Uid:f9ad2dcfa0c4e0e6878c4094ed9e89bb,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:25.023949 containerd[1739]: time="2025-12-16T13:07:25.023920647Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-ace8908665,Uid:7a3592e225c411b7ffb26cf502895fbd,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:25.026539 containerd[1739]: time="2025-12-16T13:07:25.026514849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-ace8908665,Uid:2bdf2638f09f9b3d204e7504500bdd1f,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:25.176328 kubelet[2798]: E1216 13:07:25.176288 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="800ms" Dec 16 13:07:25.353336 kubelet[2798]: I1216 13:07:25.353308 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:25.353743 kubelet[2798]: E1216 13:07:25.353716 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665" Dec 16 13:07:25.412461 kubelet[2798]: W1216 13:07:25.412403 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:25.412538 kubelet[2798]: E1216 13:07:25.412469 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:25.612892 kubelet[2798]: W1216 13:07:25.612776 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:25.612892 kubelet[2798]: E1216 13:07:25.612824 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:25.704994 kubelet[2798]: W1216 13:07:25.704945 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:25.705093 kubelet[2798]: E1216 13:07:25.705001 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:25.809367 containerd[1739]: time="2025-12-16T13:07:25.809321024Z" level=info msg="connecting to shim ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc" 
address="unix:///run/containerd/s/b2e59ec97687b9bd8945191821c2ccd906b04599282036024da9583792885ed3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:25.837833 systemd[1]: Started cri-containerd-ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc.scope - libcontainer container ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc. Dec 16 13:07:25.852188 kubelet[2798]: W1216 13:07:25.851995 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused Dec 16 13:07:25.852188 kubelet[2798]: E1216 13:07:25.852159 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:07:25.858816 containerd[1739]: time="2025-12-16T13:07:25.858732587Z" level=info msg="connecting to shim 0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6" address="unix:///run/containerd/s/0b75eaf7df11e05de8ab6058ee5bc4434047de3a8a6025f5c8946f1acf17eea4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:25.889850 systemd[1]: Started cri-containerd-0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6.scope - libcontainer container 0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6. Dec 16 13:07:25.941447 containerd[1739]: time="2025-12-16T13:07:25.941402977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-ace8908665,Uid:7a3592e225c411b7ffb26cf502895fbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc\"" Dec 16 13:07:25.943554 containerd[1739]: time="2025-12-16T13:07:25.943532221Z" level=info msg="CreateContainer within sandbox \"ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:07:25.952068 containerd[1739]: time="2025-12-16T13:07:25.952007924Z" level=info msg="connecting to shim 00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03" address="unix:///run/containerd/s/23a2b1e7f1dea14d606085d673b81675ac1b787bde5ed2634413b8d91b04a8ff" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:25.970785 systemd[1]: Started cri-containerd-00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03.scope - libcontainer container 00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03. 
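Once RunPodSandbox returns and the cri-containerd-*.scope units start, the sandboxes become visible through the CRI as pods. Standard crictl queries, using a pod name as it appears in this log:

  crictl pods --name kube-controller-manager-ci-4459.2.2-a-ace8908665
  crictl ps -a   # the containers follow via CreateContainer, as logged nearby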
Dec 16 13:07:25.977055 kubelet[2798]: E1216 13:07:25.976997 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="1.6s"
Dec 16 13:07:25.992609 containerd[1739]: time="2025-12-16T13:07:25.992541843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-ace8908665,Uid:f9ad2dcfa0c4e0e6878c4094ed9e89bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6\""
Dec 16 13:07:25.994894 containerd[1739]: time="2025-12-16T13:07:25.994363701Z" level=info msg="CreateContainer within sandbox \"0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 13:07:26.155378 kubelet[2798]: I1216 13:07:26.155358 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:26.155745 kubelet[2798]: E1216 13:07:26.155722 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:26.607093 kubelet[2798]: E1216 13:07:26.606995 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:27.482111 kubelet[2798]: E1216 13:07:27.482004 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-ace8908665.1881b400d43b6eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-ace8908665,UID:ci-4459.2.2-a-ace8908665,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-ace8908665,},FirstTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,LastTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-ace8908665,}"
Dec 16 13:07:27.577941 kubelet[2798]: E1216 13:07:27.577907 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="3.2s"
Dec 16 13:07:27.639415 kubelet[2798]: W1216 13:07:27.639365 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:27.639733 kubelet[2798]: E1216 13:07:27.639423 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:27.697429 containerd[1739]: time="2025-12-16T13:07:27.697390819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-ace8908665,Uid:2bdf2638f09f9b3d204e7504500bdd1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03\""
Dec 16 13:07:27.699558 containerd[1739]: time="2025-12-16T13:07:27.699524718Z" level=info msg="CreateContainer within sandbox \"00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 13:07:27.734324 kubelet[2798]: W1216 13:07:27.734252 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:27.734324 kubelet[2798]: E1216 13:07:27.734289 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:27.757871 kubelet[2798]: I1216 13:07:27.757856 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:27.758142 kubelet[2798]: E1216 13:07:27.758110 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:28.141613 kubelet[2798]: W1216 13:07:28.141491 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:28.141613 kubelet[2798]: E1216 13:07:28.141555 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:28.515941 kubelet[2798]: W1216 13:07:28.515883 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:28.515941 kubelet[2798]: E1216 13:07:28.515949 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:28.550729 containerd[1739]: time="2025-12-16T13:07:28.550687976Z" level=info msg="Container b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:29.613478 waagent[1924]: 2025-12-16T13:07:29.613427Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Dec 16 13:07:29.619773 waagent[1924]: 2025-12-16T13:07:29.619738Z INFO ExtHandler
Dec 16 13:07:29.619875 waagent[1924]: 2025-12-16T13:07:29.619818Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: df0ac5c0-b168-4601-aab9-4095eb0ce165 eTag: 10534318606606748491 source: Fabric]
Dec 16 13:07:29.620082 waagent[1924]: 2025-12-16T13:07:29.620057Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 16 13:07:29.702804 waagent[1924]: 2025-12-16T13:07:29.699893Z INFO ExtHandler
Dec 16 13:07:29.702804 waagent[1924]: 2025-12-16T13:07:29.700035Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Dec 16 13:07:29.706679 containerd[1739]: time="2025-12-16T13:07:29.704958270Z" level=info msg="Container bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:29.707023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674242185.mount: Deactivated successfully.
Dec 16 13:07:29.755677 waagent[1924]: 2025-12-16T13:07:29.755620Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 16 13:07:30.778386 kubelet[2798]: E1216 13:07:30.778335 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-ace8908665?timeout=10s\": dial tcp 10.200.0.12:6443: connect: connection refused" interval="6.4s"
Dec 16 13:07:30.953892 kubelet[2798]: E1216 13:07:30.953856 2798 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:30.960453 kubelet[2798]: I1216 13:07:30.960422 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:30.960739 kubelet[2798]: E1216 13:07:30.960717 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.12:6443/api/v1/nodes\": dial tcp 10.200.0.12:6443: connect: connection refused" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:31.681810 kubelet[2798]: W1216 13:07:31.681761 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:31.681810 kubelet[2798]: E1216 13:07:31.681814 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:32.238995 kubelet[2798]: W1216 13:07:32.238953 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:32.239332 kubelet[2798]: E1216 13:07:32.239003 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:32.700787 kubelet[2798]: W1216 13:07:32.700753 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:32.700913 kubelet[2798]: E1216 13:07:32.700794 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-ace8908665&limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.591905Z INFO ExtHandler Downloaded certificate {'thumbprint': '19B43CF5D33DF3B29315D443002B3103E941229D', 'hasPrivateKey': True}
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.592507Z INFO ExtHandler Fetch goal state completed
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.592835Z INFO ExtHandler ExtHandler
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.592880Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 4438ce4c-b040-4de6-9abd-0094360083fd correlation 5bce89c4-dcc5-4caa-b441-76934e1b8f47 created: 2025-12-16T13:07:20.648499Z]
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.593086Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 16 13:07:33.593757 waagent[1924]: 2025-12-16T13:07:33.593533Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms]
Dec 16 13:07:34.133375 kubelet[2798]: W1216 13:07:34.133325 2798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.0.12:6443: connect: connection refused
Dec 16 13:07:34.133375 kubelet[2798]: E1216 13:07:34.133378 2798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.12:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:07:34.649369 kubelet[2798]: E1216 13:07:34.649296 2798 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:35.243653 containerd[1739]: time="2025-12-16T13:07:35.243615917Z" level=info msg="CreateContainer within sandbox \"ba4fcb433a85ba692fec15ff16ca15b48ddd77284d6c008d342443a2f623d5cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8\""
Dec 16 13:07:35.244358 containerd[1739]: time="2025-12-16T13:07:35.244319609Z" level=info msg="StartContainer for \"b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8\""
Dec 16 13:07:35.245241 containerd[1739]: time="2025-12-16T13:07:35.245211832Z" level=info msg="connecting to shim b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8" address="unix:///run/containerd/s/b2e59ec97687b9bd8945191821c2ccd906b04599282036024da9583792885ed3" protocol=ttrpc version=3
Dec 16 13:07:35.267812 systemd[1]: Started cri-containerd-b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8.scope - libcontainer container b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8.
Dec 16 13:07:35.340188 containerd[1739]: time="2025-12-16T13:07:35.340142469Z" level=info msg="StartContainer for \"b43669811d3f006fcd010372ed06d882e461e587ef35baf3d4ebd304009e54b8\" returns successfully"
Dec 16 13:07:35.400099 containerd[1739]: time="2025-12-16T13:07:35.400070776Z" level=info msg="CreateContainer within sandbox \"0f3267ce81c6a98ed9fa82ee5e808b341f795415c85762e31bbb9728aeb25da6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f\""
Dec 16 13:07:35.400815 containerd[1739]: time="2025-12-16T13:07:35.400678655Z" level=info msg="StartContainer for \"bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f\""
Dec 16 13:07:35.445607 containerd[1739]: time="2025-12-16T13:07:35.445572027Z" level=info msg="connecting to shim bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f" address="unix:///run/containerd/s/0b75eaf7df11e05de8ab6058ee5bc4434047de3a8a6025f5c8946f1acf17eea4" protocol=ttrpc version=3
Dec 16 13:07:35.458377 containerd[1739]: time="2025-12-16T13:07:35.458342998Z" level=info msg="Container afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:35.471965 systemd[1]: Started cri-containerd-bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f.scope - libcontainer container bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f.
Dec 16 13:07:35.544046 containerd[1739]: time="2025-12-16T13:07:35.543392991Z" level=info msg="StartContainer for \"bb96216df4b97c534e766070417e080a7125f93db5d453547387c2bb430c4b3f\" returns successfully"
Dec 16 13:07:35.620021 containerd[1739]: time="2025-12-16T13:07:35.619996081Z" level=info msg="CreateContainer within sandbox \"00af4535e43ac87667aee8bad4bea316fc369240285022de9c2ee621f6646b03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae\""
Dec 16 13:07:35.622130 containerd[1739]: time="2025-12-16T13:07:35.622080272Z" level=info msg="StartContainer for \"afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae\""
Dec 16 13:07:35.623532 containerd[1739]: time="2025-12-16T13:07:35.623509511Z" level=info msg="connecting to shim afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae" address="unix:///run/containerd/s/23a2b1e7f1dea14d606085d673b81675ac1b787bde5ed2634413b8d91b04a8ff" protocol=ttrpc version=3
Dec 16 13:07:35.624690 kubelet[2798]: E1216 13:07:35.624256 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:35.631487 kubelet[2798]: E1216 13:07:35.631470 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:35.652003 systemd[1]: Started cri-containerd-afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae.scope - libcontainer container afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae.
Dec 16 13:07:35.832591 containerd[1739]: time="2025-12-16T13:07:35.832487875Z" level=info msg="StartContainer for \"afcdeac4293e8bfc66ff6db35fa70280a146e00d213f89bcaaea7ffa7a774dae\" returns successfully"
Dec 16 13:07:36.631123 kubelet[2798]: E1216 13:07:36.630952 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:36.631954 kubelet[2798]: E1216 13:07:36.631929 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:36.632242 kubelet[2798]: E1216 13:07:36.632112 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:37.365046 kubelet[2798]: I1216 13:07:37.365004 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:37.639140 kubelet[2798]: E1216 13:07:37.638066 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:37.639140 kubelet[2798]: E1216 13:07:37.638293 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:37.989579 kubelet[2798]: E1216 13:07:37.989524 2798 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:38.095259 kubelet[2798]: E1216 13:07:38.095150 2798 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.2-a-ace8908665.1881b400d43b6eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-ace8908665,UID:ci-4459.2.2-a-ace8908665,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-ace8908665,},FirstTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,LastTimestamp:2025-12-16 13:07:24.55960953 +0000 UTC m=+0.271161875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-ace8908665,}"
Dec 16 13:07:38.155699 kubelet[2798]: I1216 13:07:38.154742 2798 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:38.155699 kubelet[2798]: E1216 13:07:38.154775 2798 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-ace8908665\": node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.170691 kubelet[2798]: E1216 13:07:38.170219 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.271053 kubelet[2798]: E1216 13:07:38.270958 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.371792 kubelet[2798]: E1216 13:07:38.371766 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.472364 kubelet[2798]: E1216 13:07:38.472334 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.573487 kubelet[2798]: E1216 13:07:38.573379 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.637122 kubelet[2798]: E1216 13:07:38.637049 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:38.637464 kubelet[2798]: E1216 13:07:38.637269 2798 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-ace8908665\" not found" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:38.673836 kubelet[2798]: E1216 13:07:38.673819 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.774490 kubelet[2798]: E1216 13:07:38.774445 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.875002 kubelet[2798]: E1216 13:07:38.874888 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:38.975383 kubelet[2798]: E1216 13:07:38.975353 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.076218 kubelet[2798]: E1216 13:07:39.076193 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.176860 kubelet[2798]: E1216 13:07:39.176829 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.277414 kubelet[2798]: E1216 13:07:39.277386 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.378275 kubelet[2798]: E1216 13:07:39.378232 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.478717 kubelet[2798]: E1216 13:07:39.478620 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.578754 kubelet[2798]: E1216 13:07:39.578725 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.678964 kubelet[2798]: E1216 13:07:39.678935 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.779442 kubelet[2798]: E1216 13:07:39.779344 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.881170 kubelet[2798]: E1216 13:07:39.880400 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:39.981002 kubelet[2798]: E1216 13:07:39.980962 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.081897 kubelet[2798]: E1216 13:07:40.081816 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.182337 kubelet[2798]: E1216 13:07:40.182305 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.237287 systemd[1]: Reload requested from client PID 3069 ('systemctl') (unit session-9.scope)...
Dec 16 13:07:40.237300 systemd[1]: Reloading...
Dec 16 13:07:40.283085 kubelet[2798]: E1216 13:07:40.283057 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.318715 zram_generator::config[3115]: No configuration found.
Dec 16 13:07:40.383324 kubelet[2798]: E1216 13:07:40.383240 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.483815 kubelet[2798]: E1216 13:07:40.483786 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:40.515370 systemd[1]: Reloading finished in 277 ms.
Dec 16 13:07:40.538592 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:07:40.553369 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:07:40.553573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:07:40.553621 systemd[1]: kubelet.service: Consumed 590ms CPU time, 131.3M memory peak.
Dec 16 13:07:40.554962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:07:42.880568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:07:42.885904 (kubelet)[3183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:07:42.926708 kubelet[3183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:07:42.926708 kubelet[3183]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:07:42.926708 kubelet[3183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:07:42.926954 kubelet[3183]: I1216 13:07:42.926785 3183 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:07:42.933140 kubelet[3183]: I1216 13:07:42.933114 3183 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:07:42.933140 kubelet[3183]: I1216 13:07:42.933135 3183 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:07:42.934691 kubelet[3183]: I1216 13:07:42.934273 3183 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:07:42.936710 kubelet[3183]: I1216 13:07:42.936620 3183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 16 13:07:42.940530 kubelet[3183]: I1216 13:07:42.940380 3183 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:07:42.945626 kubelet[3183]: I1216 13:07:42.945607 3183 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:07:42.947815 kubelet[3183]: I1216 13:07:42.947798 3183 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:07:42.947956 kubelet[3183]: I1216 13:07:42.947935 3183 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:07:42.948085 kubelet[3183]: I1216 13:07:42.947956 3183 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-ace8908665","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:07:42.948171 kubelet[3183]: I1216 13:07:42.948091 3183 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:07:42.948171 kubelet[3183]: I1216 13:07:42.948101 3183 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 13:07:42.948171 kubelet[3183]: I1216 13:07:42.948141 3183 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:07:42.948623 kubelet[3183]: I1216 13:07:42.948252 3183 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 13:07:42.948623 kubelet[3183]: I1216 13:07:42.948279 3183 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:07:42.948623 kubelet[3183]: I1216 13:07:42.948299 3183 kubelet.go:352] "Adding apiserver pod source"
Dec 16 13:07:42.948623 kubelet[3183]: I1216 13:07:42.948308 3183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:07:42.951709 kubelet[3183]: I1216 13:07:42.951691 3183 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:07:42.952126 kubelet[3183]: I1216 13:07:42.952115 3183 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 13:07:42.952545 kubelet[3183]: I1216 13:07:42.952533 3183 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:07:42.952616 kubelet[3183]: I1216 13:07:42.952610 3183 server.go:1287] "Started kubelet"
Dec 16 13:07:42.957201 kubelet[3183]: I1216 13:07:42.956173 3183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:07:42.959832 kubelet[3183]: I1216 13:07:42.956390 3183 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:07:42.963445 kubelet[3183]: I1216 13:07:42.961563 3183 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 13:07:42.966271 kubelet[3183]: E1216 13:07:42.965741 3183 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:07:42.966676 kubelet[3183]: I1216 13:07:42.956642 3183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:07:42.967078 kubelet[3183]: I1216 13:07:42.966909 3183 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:07:42.967498 kubelet[3183]: E1216 13:07:42.967326 3183 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-ace8908665\" not found"
Dec 16 13:07:42.968081 kubelet[3183]: I1216 13:07:42.956428 3183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:07:42.968636 kubelet[3183]: I1216 13:07:42.968387 3183 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:07:42.969472 kubelet[3183]: I1216 13:07:42.969066 3183 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:07:42.969911 kubelet[3183]: I1216 13:07:42.969738 3183 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:07:42.974394 kubelet[3183]: I1216 13:07:42.974366 3183 factory.go:221] Registration of the systemd container factory successfully
Dec 16 13:07:42.974470 kubelet[3183]: I1216 13:07:42.974435 3183 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:07:42.979328 kubelet[3183]: I1216 13:07:42.978124 3183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:07:42.979328 kubelet[3183]: I1216 13:07:42.979153 3183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:07:42.979328 kubelet[3183]: I1216 13:07:42.979174 3183 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 13:07:42.979328 kubelet[3183]: I1216 13:07:42.979191 3183 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:07:42.979328 kubelet[3183]: I1216 13:07:42.979197 3183 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 13:07:42.979328 kubelet[3183]: E1216 13:07:42.979232 3183 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:07:42.981945 kubelet[3183]: I1216 13:07:42.981088 3183 factory.go:221] Registration of the containerd container factory successfully
Dec 16 13:07:43.029676 kubelet[3183]: I1216 13:07:43.029653 3183 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:07:43.029676 kubelet[3183]: I1216 13:07:43.029678 3183 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:07:43.029771 kubelet[3183]: I1216 13:07:43.029692 3183 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:07:43.029861 kubelet[3183]: I1216 13:07:43.029851 3183 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:07:43.029885 kubelet[3183]: I1216 13:07:43.029864 3183 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:07:43.029885 kubelet[3183]: I1216 13:07:43.029880 3183 policy_none.go:49] "None policy: Start"
Dec 16 13:07:43.029927 kubelet[3183]: I1216 13:07:43.029890 3183 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:07:43.029927 kubelet[3183]: I1216 13:07:43.029898 3183 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:07:43.031291 kubelet[3183]: I1216 13:07:43.030034 3183 state_mem.go:75] "Updated machine memory state"
Dec 16 13:07:43.039766 kubelet[3183]: I1216 13:07:43.039750 3183 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 13:07:43.040152 kubelet[3183]: I1216 13:07:43.039867 3183 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:07:43.040152 kubelet[3183]: I1216 13:07:43.039875 3183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:07:43.040152 kubelet[3183]: I1216 13:07:43.040053 3183 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:07:43.044736 kubelet[3183]: E1216 13:07:43.042764 3183 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:07:43.080579 kubelet[3183]: I1216 13:07:43.080564 3183 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.080926 kubelet[3183]: I1216 13:07:43.080778 3183 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.082291 kubelet[3183]: I1216 13:07:43.080832 3183 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.088940 kubelet[3183]: W1216 13:07:43.088927 3183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:07:43.092623 kubelet[3183]: W1216 13:07:43.092605 3183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:07:43.093014 kubelet[3183]: W1216 13:07:43.092899 3183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:07:43.144153 kubelet[3183]: I1216 13:07:43.144093 3183 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.155817 kubelet[3183]: I1216 13:07:43.155788 3183 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.155889 kubelet[3183]: I1216 13:07:43.155866 3183 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.170945 kubelet[3183]: I1216 13:07:43.170728 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271463 kubelet[3183]: I1216 13:07:43.271416 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bdf2638f09f9b3d204e7504500bdd1f-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-ace8908665\" (UID: \"2bdf2638f09f9b3d204e7504500bdd1f\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271825 kubelet[3183]: I1216 13:07:43.271567 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271825 kubelet[3183]: I1216 13:07:43.271594 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271825 kubelet[3183]: I1216 13:07:43.271616 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271825 kubelet[3183]: I1216 13:07:43.271634 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271825 kubelet[3183]: I1216 13:07:43.271654 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271928 kubelet[3183]: I1216 13:07:43.271704 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ad2dcfa0c4e0e6878c4094ed9e89bb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-ace8908665\" (UID: \"f9ad2dcfa0c4e0e6878c4094ed9e89bb\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.271928 kubelet[3183]: I1216 13:07:43.271723 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a3592e225c411b7ffb26cf502895fbd-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-ace8908665\" (UID: \"7a3592e225c411b7ffb26cf502895fbd\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:43.954059 kubelet[3183]: I1216 13:07:43.954027 3183 apiserver.go:52] "Watching apiserver"
Dec 16 13:07:43.970230 kubelet[3183]: I1216 13:07:43.970195 3183 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:07:44.018501 kubelet[3183]: I1216 13:07:44.018225 3183 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:44.038271 kubelet[3183]: W1216 13:07:44.038246 3183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 16 13:07:44.038369 kubelet[3183]: E1216 13:07:44.038291 3183 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-ace8908665\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665"
Dec 16 13:07:44.039098 kubelet[3183]: I1216 13:07:44.038761 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-ace8908665" podStartSLOduration=1.038749287 podStartE2EDuration="1.038749287s" podCreationTimestamp="2025-12-16 13:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:44.038532526 +0000 UTC m=+1.149212999" watchObservedRunningTime="2025-12-16 13:07:44.038749287 +0000 UTC m=+1.149429760"
Dec 16 13:07:44.056941 kubelet[3183]: I1216 13:07:44.056900 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-ace8908665" podStartSLOduration=1.056886132 podStartE2EDuration="1.056886132s" podCreationTimestamp="2025-12-16 13:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:44.048272512 +0000 UTC m=+1.158952983" watchObservedRunningTime="2025-12-16 13:07:44.056886132 +0000 UTC m=+1.167566601"
Dec 16 13:07:44.057090 kubelet[3183]: I1216 13:07:44.056986 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-ace8908665" podStartSLOduration=1.056981784 podStartE2EDuration="1.056981784s" podCreationTimestamp="2025-12-16 13:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:44.05662417 +0000 UTC m=+1.167304647" watchObservedRunningTime="2025-12-16 13:07:44.056981784 +0000 UTC m=+1.167662258"
Dec 16 13:07:45.118347 kubelet[3183]: I1216 13:07:45.118316 3183 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:07:45.118716 containerd[1739]: time="2025-12-16T13:07:45.118620464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:07:45.119008 kubelet[3183]: I1216 13:07:45.118773 3183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:07:45.812773 systemd[1]: Created slice kubepods-besteffort-pod67deb279_75c6_41d3_856a_9394d316242d.slice - libcontainer container kubepods-besteffort-pod67deb279_75c6_41d3_856a_9394d316242d.slice.
Dec 16 13:07:45.889505 kubelet[3183]: I1216 13:07:45.889346 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67deb279-75c6-41d3-856a-9394d316242d-kube-proxy\") pod \"kube-proxy-7x2zc\" (UID: \"67deb279-75c6-41d3-856a-9394d316242d\") " pod="kube-system/kube-proxy-7x2zc"
Dec 16 13:07:45.889505 kubelet[3183]: I1216 13:07:45.889380 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67deb279-75c6-41d3-856a-9394d316242d-lib-modules\") pod \"kube-proxy-7x2zc\" (UID: \"67deb279-75c6-41d3-856a-9394d316242d\") " pod="kube-system/kube-proxy-7x2zc"
Dec 16 13:07:45.889505 kubelet[3183]: I1216 13:07:45.889403 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67deb279-75c6-41d3-856a-9394d316242d-xtables-lock\") pod \"kube-proxy-7x2zc\" (UID: \"67deb279-75c6-41d3-856a-9394d316242d\") " pod="kube-system/kube-proxy-7x2zc"
Dec 16 13:07:45.889505 kubelet[3183]: I1216 13:07:45.889422 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6g5\" (UniqueName: \"kubernetes.io/projected/67deb279-75c6-41d3-856a-9394d316242d-kube-api-access-ph6g5\") pod \"kube-proxy-7x2zc\" (UID: \"67deb279-75c6-41d3-856a-9394d316242d\") " pod="kube-system/kube-proxy-7x2zc"
Dec 16 13:07:46.003746 kubelet[3183]: E1216 13:07:46.003713 3183 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 16 13:07:46.003746 kubelet[3183]: E1216 13:07:46.003740 3183 projected.go:194] Error preparing data for projected volume kube-api-access-ph6g5 for pod kube-system/kube-proxy-7x2zc: configmap "kube-root-ca.crt" not found
Dec 16 13:07:46.003891 kubelet[3183]: E1216 13:07:46.003814 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/67deb279-75c6-41d3-856a-9394d316242d-kube-api-access-ph6g5 podName:67deb279-75c6-41d3-856a-9394d316242d nodeName:}" failed. No retries permitted until 2025-12-16 13:07:46.503793082 +0000 UTC m=+3.614473551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ph6g5" (UniqueName: "kubernetes.io/projected/67deb279-75c6-41d3-856a-9394d316242d-kube-api-access-ph6g5") pod "kube-proxy-7x2zc" (UID: "67deb279-75c6-41d3-856a-9394d316242d") : configmap "kube-root-ca.crt" not found
Dec 16 13:07:46.237658 systemd[1]: Created slice kubepods-besteffort-pod49a4f273_becc_475f_b1ac_b48a6e33c225.slice - libcontainer container kubepods-besteffort-pod49a4f273_becc_475f_b1ac_b48a6e33c225.slice.
Dec 16 13:07:46.292244 kubelet[3183]: I1216 13:07:46.292212 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzp9\" (UniqueName: \"kubernetes.io/projected/49a4f273-becc-475f-b1ac-b48a6e33c225-kube-api-access-qwzp9\") pod \"tigera-operator-7dcd859c48-m6htj\" (UID: \"49a4f273-becc-475f-b1ac-b48a6e33c225\") " pod="tigera-operator/tigera-operator-7dcd859c48-m6htj"
Dec 16 13:07:46.292509 kubelet[3183]: I1216 13:07:46.292257 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49a4f273-becc-475f-b1ac-b48a6e33c225-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m6htj\" (UID: \"49a4f273-becc-475f-b1ac-b48a6e33c225\") " pod="tigera-operator/tigera-operator-7dcd859c48-m6htj"
Dec 16 13:07:46.542580 containerd[1739]: time="2025-12-16T13:07:46.542482813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m6htj,Uid:49a4f273-becc-475f-b1ac-b48a6e33c225,Namespace:tigera-operator,Attempt:0,}"
Dec 16 13:07:46.585075 containerd[1739]: time="2025-12-16T13:07:46.585005773Z" level=info msg="connecting to shim eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa" address="unix:///run/containerd/s/437a16c5944af84110f1d1b093b0a07b4f44bbdb9e61ebd6880c61a227a51b9e" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:07:46.611808 systemd[1]: Started cri-containerd-eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa.scope - libcontainer container eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa.
Dec 16 13:07:46.648437 containerd[1739]: time="2025-12-16T13:07:46.648408945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m6htj,Uid:49a4f273-becc-475f-b1ac-b48a6e33c225,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa\""
Dec 16 13:07:46.650906 containerd[1739]: time="2025-12-16T13:07:46.650869158Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 16 13:07:46.722905 containerd[1739]: time="2025-12-16T13:07:46.722882108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7x2zc,Uid:67deb279-75c6-41d3-856a-9394d316242d,Namespace:kube-system,Attempt:0,}"
Dec 16 13:07:46.762999 containerd[1739]: time="2025-12-16T13:07:46.762970908Z" level=info msg="connecting to shim 5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157" address="unix:///run/containerd/s/b271c9a9021d2e6c9ed1919d2ead6d8d96115d05ee61ca1af527de79a9f1f966" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:07:46.785788 systemd[1]: Started cri-containerd-5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157.scope - libcontainer container 5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157.
Dec 16 13:07:46.807557 containerd[1739]: time="2025-12-16T13:07:46.807308063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7x2zc,Uid:67deb279-75c6-41d3-856a-9394d316242d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157\""
Dec 16 13:07:46.810339 containerd[1739]: time="2025-12-16T13:07:46.810302202Z" level=info msg="CreateContainer within sandbox \"5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 13:07:46.830882 containerd[1739]: time="2025-12-16T13:07:46.830856976Z" level=info msg="Container 9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:46.847889 containerd[1739]: time="2025-12-16T13:07:46.847865593Z" level=info msg="CreateContainer within sandbox \"5532c622ae493adb8c4b6caf420a1f637ba94c88a055a65b2d1cd81419075157\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3\""
Dec 16 13:07:46.848440 containerd[1739]: time="2025-12-16T13:07:46.848417564Z" level=info msg="StartContainer for \"9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3\""
Dec 16 13:07:46.849806 containerd[1739]: time="2025-12-16T13:07:46.849779371Z" level=info msg="connecting to shim 9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3" address="unix:///run/containerd/s/b271c9a9021d2e6c9ed1919d2ead6d8d96115d05ee61ca1af527de79a9f1f966" protocol=ttrpc version=3
Dec 16 13:07:46.864826 systemd[1]: Started cri-containerd-9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3.scope - libcontainer container 9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3.
Dec 16 13:07:46.930791 containerd[1739]: time="2025-12-16T13:07:46.930758961Z" level=info msg="StartContainer for \"9ace6e241db08e0863216b16e4ccd524eb57f1dc08a2eff91e981715fbad2fd3\" returns successfully"
Dec 16 13:07:47.039328 kubelet[3183]: I1216 13:07:47.039273 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7x2zc" podStartSLOduration=2.039257287 podStartE2EDuration="2.039257287s" podCreationTimestamp="2025-12-16 13:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:47.039030081 +0000 UTC m=+4.149710553" watchObservedRunningTime="2025-12-16 13:07:47.039257287 +0000 UTC m=+4.149937756"
Dec 16 13:07:49.177018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408805932.mount: Deactivated successfully.
Dec 16 13:07:49.601676 containerd[1739]: time="2025-12-16T13:07:49.601561057Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:07:49.604237 containerd[1739]: time="2025-12-16T13:07:49.604212862Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 16 13:07:49.607326 containerd[1739]: time="2025-12-16T13:07:49.607287206Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:07:49.612921 containerd[1739]: time="2025-12-16T13:07:49.612872426Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:07:49.613706 containerd[1739]: time="2025-12-16T13:07:49.613320804Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.962402922s"
Dec 16 13:07:49.613706 containerd[1739]: time="2025-12-16T13:07:49.613348550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 16 13:07:49.615697 containerd[1739]: time="2025-12-16T13:07:49.615485840Z" level=info msg="CreateContainer within sandbox \"eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 13:07:49.642968 containerd[1739]: time="2025-12-16T13:07:49.642941035Z" level=info msg="Container f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:07:49.662255 containerd[1739]: time="2025-12-16T13:07:49.662231842Z" level=info msg="CreateContainer within sandbox \"eb3b05b826756b67371855ea702581ef6a1438d9cde30d3a5332b3836a3768fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4\""
Dec 16 13:07:49.662723 containerd[1739]: time="2025-12-16T13:07:49.662699887Z" level=info msg="StartContainer for \"f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4\""
Dec 16 13:07:49.663382 containerd[1739]: time="2025-12-16T13:07:49.663354845Z" level=info msg="connecting to shim f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4" address="unix:///run/containerd/s/437a16c5944af84110f1d1b093b0a07b4f44bbdb9e61ebd6880c61a227a51b9e" protocol=ttrpc version=3
Dec 16 13:07:49.687807 systemd[1]: Started cri-containerd-f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4.scope - libcontainer container f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4.
Dec 16 13:07:49.716628 containerd[1739]: time="2025-12-16T13:07:49.716603893Z" level=info msg="StartContainer for \"f1891fe1f618b919d9866136a3dc3c3bd6d1bc0a4b31a94ee2ebddc12411a9c4\" returns successfully"
Dec 16 13:07:50.054428 kubelet[3183]: I1216 13:07:50.054375 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m6htj" podStartSLOduration=1.089691363 podStartE2EDuration="4.054362161s" podCreationTimestamp="2025-12-16 13:07:46 +0000 UTC" firstStartedPulling="2025-12-16 13:07:46.649510227 +0000 UTC m=+3.760190692" lastFinishedPulling="2025-12-16 13:07:49.614181016 +0000 UTC m=+6.724861490" observedRunningTime="2025-12-16 13:07:50.054110046 +0000 UTC m=+7.164790521" watchObservedRunningTime="2025-12-16 13:07:50.054362161 +0000 UTC m=+7.165042632"
Dec 16 13:07:55.359409 sudo[2183]: pam_unix(sudo:session): session closed for user root
Dec 16 13:07:55.448885 sshd[2182]: Connection closed by 10.200.16.10 port 37804
Dec 16 13:07:55.446923 sshd-session[2179]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:55.452899 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:07:55.454100 systemd[1]: sshd@6-10.200.0.12:22-10.200.16.10:37804.service: Deactivated successfully.
Dec 16 13:07:55.457319 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:07:55.457479 systemd[1]: session-9.scope: Consumed 2.985s CPU time, 227.6M memory peak.
Dec 16 13:07:55.461491 systemd-logind[1709]: Removed session 9.
Dec 16 13:07:59.824985 systemd[1]: Created slice kubepods-besteffort-poddadfd034_7fdd_497c_a17f_9033ed70ccd7.slice - libcontainer container kubepods-besteffort-poddadfd034_7fdd_497c_a17f_9033ed70ccd7.slice.
Dec 16 13:07:59.875646 kubelet[3183]: I1216 13:07:59.875612 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6bs\" (UniqueName: \"kubernetes.io/projected/dadfd034-7fdd-497c-a17f-9033ed70ccd7-kube-api-access-ql6bs\") pod \"calico-typha-69d89ff549-p87fm\" (UID: \"dadfd034-7fdd-497c-a17f-9033ed70ccd7\") " pod="calico-system/calico-typha-69d89ff549-p87fm"
Dec 16 13:07:59.875646 kubelet[3183]: I1216 13:07:59.875648 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dadfd034-7fdd-497c-a17f-9033ed70ccd7-tigera-ca-bundle\") pod \"calico-typha-69d89ff549-p87fm\" (UID: \"dadfd034-7fdd-497c-a17f-9033ed70ccd7\") " pod="calico-system/calico-typha-69d89ff549-p87fm"
Dec 16 13:07:59.875948 kubelet[3183]: I1216 13:07:59.875675 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dadfd034-7fdd-497c-a17f-9033ed70ccd7-typha-certs\") pod \"calico-typha-69d89ff549-p87fm\" (UID: \"dadfd034-7fdd-497c-a17f-9033ed70ccd7\") " pod="calico-system/calico-typha-69d89ff549-p87fm"
Dec 16 13:08:00.012359 systemd[1]: Created slice kubepods-besteffort-pod1bb934ad_05f4_4924_a57e_f48ae976bff0.slice - libcontainer container kubepods-besteffort-pod1bb934ad_05f4_4924_a57e_f48ae976bff0.slice.
Dec 16 13:08:00.076506 kubelet[3183]: I1216 13:08:00.076244 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1bb934ad-05f4-4924-a57e-f48ae976bff0-node-certs\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076506 kubelet[3183]: I1216 13:08:00.076274 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-var-lib-calico\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076506 kubelet[3183]: I1216 13:08:00.076292 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-lib-modules\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076506 kubelet[3183]: I1216 13:08:00.076308 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-cni-bin-dir\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076506 kubelet[3183]: I1216 13:08:00.076326 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-cni-log-dir\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076723 kubelet[3183]: I1216 13:08:00.076340 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-policysync\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076723 kubelet[3183]: I1216 13:08:00.076358 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb934ad-05f4-4924-a57e-f48ae976bff0-tigera-ca-bundle\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076723 kubelet[3183]: I1216 13:08:00.076373 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-var-run-calico\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076723 kubelet[3183]: I1216 13:08:00.076391 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-xtables-lock\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076723 kubelet[3183]: I1216 13:08:00.076410 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85pq8\" (UniqueName: \"kubernetes.io/projected/1bb934ad-05f4-4924-a57e-f48ae976bff0-kube-api-access-85pq8\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076807 kubelet[3183]: I1216 13:08:00.076431 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-flexvol-driver-host\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.076807 kubelet[3183]: I1216 13:08:00.076451 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1bb934ad-05f4-4924-a57e-f48ae976bff0-cni-net-dir\") pod \"calico-node-4266q\" (UID: \"1bb934ad-05f4-4924-a57e-f48ae976bff0\") " pod="calico-system/calico-node-4266q"
Dec 16 13:08:00.132894 containerd[1739]: time="2025-12-16T13:08:00.132856350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d89ff549-p87fm,Uid:dadfd034-7fdd-497c-a17f-9033ed70ccd7,Namespace:calico-system,Attempt:0,}"
Dec 16 13:08:00.187198 kubelet[3183]: E1216 13:08:00.187161 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:00.187198 kubelet[3183]: W1216 13:08:00.187184 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:00.187336 kubelet[3183]: E1216 13:08:00.187220 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:08:00.200256 kubelet[3183]: E1216 13:08:00.200233 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:00.200256 kubelet[3183]: W1216 13:08:00.200253 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:00.200381 kubelet[3183]: E1216 13:08:00.200269 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:08:00.202443 containerd[1739]: time="2025-12-16T13:08:00.202381294Z" level=info msg="connecting to shim 948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e" address="unix:///run/containerd/s/6d178d1e4dd0479aeae5b1020fb4759eb6203a2b893204475fa224810c012c06" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:00.219691 kubelet[3183]: E1216 13:08:00.219614 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:00.219764 kubelet[3183]: W1216 13:08:00.219628 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:00.219764 kubelet[3183]: E1216 13:08:00.219746 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:08:00.240819 systemd[1]: Started cri-containerd-948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e.scope - libcontainer container 948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e.
Dec 16 13:08:00.257588 kubelet[3183]: E1216 13:08:00.257557 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:08:00.264714 kubelet[3183]: E1216 13:08:00.264695 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:00.264714 kubelet[3183]: W1216 13:08:00.264715 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:00.264828 kubelet[3183]: E1216 13:08:00.264730 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:08:00.264898 kubelet[3183]: E1216 13:08:00.264890 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:00.264925 kubelet[3183]: W1216 13:08:00.264899 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:00.264925 kubelet[3183]: E1216 13:08:00.264908 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 13:08:00.265027 kubelet[3183]: E1216 13:08:00.265020 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265052 kubelet[3183]: W1216 13:08:00.265028 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265052 kubelet[3183]: E1216 13:08:00.265035 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.265189 kubelet[3183]: E1216 13:08:00.265182 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265213 kubelet[3183]: W1216 13:08:00.265190 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265213 kubelet[3183]: E1216 13:08:00.265197 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.265320 kubelet[3183]: E1216 13:08:00.265313 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265347 kubelet[3183]: W1216 13:08:00.265322 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265347 kubelet[3183]: E1216 13:08:00.265329 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.265434 kubelet[3183]: E1216 13:08:00.265426 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265459 kubelet[3183]: W1216 13:08:00.265434 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265459 kubelet[3183]: E1216 13:08:00.265441 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265526 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265880 kubelet[3183]: W1216 13:08:00.265531 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265546 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265631 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265880 kubelet[3183]: W1216 13:08:00.265637 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265644 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265762 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.265880 kubelet[3183]: W1216 13:08:00.265767 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.265880 kubelet[3183]: E1216 13:08:00.265774 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.266327 kubelet[3183]: E1216 13:08:00.266312 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.266375 kubelet[3183]: W1216 13:08:00.266327 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.266458 kubelet[3183]: E1216 13:08:00.266416 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.266655 kubelet[3183]: E1216 13:08:00.266644 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.266745 kubelet[3183]: W1216 13:08:00.266657 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.266865 kubelet[3183]: E1216 13:08:00.266753 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.268218 kubelet[3183]: E1216 13:08:00.266969 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.268218 kubelet[3183]: W1216 13:08:00.267702 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.268218 kubelet[3183]: E1216 13:08:00.267716 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.269797 kubelet[3183]: E1216 13:08:00.269782 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.269885 kubelet[3183]: W1216 13:08:00.269877 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.269934 kubelet[3183]: E1216 13:08:00.269927 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.270145 kubelet[3183]: E1216 13:08:00.270139 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.270187 kubelet[3183]: W1216 13:08:00.270181 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.270232 kubelet[3183]: E1216 13:08:00.270226 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.270380 kubelet[3183]: E1216 13:08:00.270375 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.270493 kubelet[3183]: W1216 13:08:00.270415 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.270493 kubelet[3183]: E1216 13:08:00.270435 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.270611 kubelet[3183]: E1216 13:08:00.270606 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.270679 kubelet[3183]: W1216 13:08:00.270644 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.270679 kubelet[3183]: E1216 13:08:00.270653 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.270893 kubelet[3183]: E1216 13:08:00.270860 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.270893 kubelet[3183]: W1216 13:08:00.270866 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.270893 kubelet[3183]: E1216 13:08:00.270873 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.271101 kubelet[3183]: E1216 13:08:00.271069 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.271101 kubelet[3183]: W1216 13:08:00.271075 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.271101 kubelet[3183]: E1216 13:08:00.271081 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.271452 kubelet[3183]: E1216 13:08:00.271260 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.271452 kubelet[3183]: W1216 13:08:00.271267 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.271452 kubelet[3183]: E1216 13:08:00.271273 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.272708 kubelet[3183]: E1216 13:08:00.271686 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.272708 kubelet[3183]: W1216 13:08:00.271694 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.272708 kubelet[3183]: E1216 13:08:00.271702 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.280841 kubelet[3183]: E1216 13:08:00.280753 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.280841 kubelet[3183]: W1216 13:08:00.280766 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.280841 kubelet[3183]: E1216 13:08:00.280778 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.280841 kubelet[3183]: I1216 13:08:00.280806 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b24eac48-5262-426b-9b2f-c5c56fc3732b-kubelet-dir\") pod \"csi-node-driver-bv5h8\" (UID: \"b24eac48-5262-426b-9b2f-c5c56fc3732b\") " pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:00.281203 kubelet[3183]: E1216 13:08:00.281104 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.281203 kubelet[3183]: W1216 13:08:00.281118 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.281203 kubelet[3183]: E1216 13:08:00.281139 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.281203 kubelet[3183]: I1216 13:08:00.281158 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b24eac48-5262-426b-9b2f-c5c56fc3732b-registration-dir\") pod \"csi-node-driver-bv5h8\" (UID: \"b24eac48-5262-426b-9b2f-c5c56fc3732b\") " pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:00.281653 kubelet[3183]: E1216 13:08:00.281457 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.281653 kubelet[3183]: W1216 13:08:00.281469 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.281653 kubelet[3183]: E1216 13:08:00.281489 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.281653 kubelet[3183]: I1216 13:08:00.281506 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b24eac48-5262-426b-9b2f-c5c56fc3732b-socket-dir\") pod \"csi-node-driver-bv5h8\" (UID: \"b24eac48-5262-426b-9b2f-c5c56fc3732b\") " pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:00.282348 kubelet[3183]: E1216 13:08:00.282208 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.282348 kubelet[3183]: W1216 13:08:00.282223 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.282348 kubelet[3183]: E1216 13:08:00.282243 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.282464 kubelet[3183]: E1216 13:08:00.282428 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.282464 kubelet[3183]: W1216 13:08:00.282438 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.282464 kubelet[3183]: E1216 13:08:00.282450 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.282855 kubelet[3183]: I1216 13:08:00.282690 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b24eac48-5262-426b-9b2f-c5c56fc3732b-varrun\") pod \"csi-node-driver-bv5h8\" (UID: \"b24eac48-5262-426b-9b2f-c5c56fc3732b\") " pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:00.282855 kubelet[3183]: E1216 13:08:00.282697 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.282855 kubelet[3183]: W1216 13:08:00.282703 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.282855 kubelet[3183]: E1216 13:08:00.282713 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.283286 kubelet[3183]: E1216 13:08:00.283189 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.283286 kubelet[3183]: W1216 13:08:00.283202 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.283286 kubelet[3183]: E1216 13:08:00.283213 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.283788 kubelet[3183]: E1216 13:08:00.283532 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.283788 kubelet[3183]: W1216 13:08:00.283540 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.283788 kubelet[3183]: E1216 13:08:00.283551 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.284020 kubelet[3183]: E1216 13:08:00.284007 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.284166 kubelet[3183]: W1216 13:08:00.284022 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.284166 kubelet[3183]: E1216 13:08:00.284034 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.284449 kubelet[3183]: E1216 13:08:00.284434 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.284492 kubelet[3183]: W1216 13:08:00.284449 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.284492 kubelet[3183]: E1216 13:08:00.284461 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.284492 kubelet[3183]: I1216 13:08:00.284481 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzrf\" (UniqueName: \"kubernetes.io/projected/b24eac48-5262-426b-9b2f-c5c56fc3732b-kube-api-access-5rzrf\") pod \"csi-node-driver-bv5h8\" (UID: \"b24eac48-5262-426b-9b2f-c5c56fc3732b\") " pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:00.285039 kubelet[3183]: E1216 13:08:00.285015 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.285497 kubelet[3183]: W1216 13:08:00.285342 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.285497 kubelet[3183]: E1216 13:08:00.285365 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.286455 kubelet[3183]: E1216 13:08:00.286326 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.286455 kubelet[3183]: W1216 13:08:00.286338 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.286730 kubelet[3183]: E1216 13:08:00.286534 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.286895 kubelet[3183]: E1216 13:08:00.286778 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.286895 kubelet[3183]: W1216 13:08:00.286786 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.286895 kubelet[3183]: E1216 13:08:00.286796 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.287241 kubelet[3183]: E1216 13:08:00.287221 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.287241 kubelet[3183]: W1216 13:08:00.287230 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.287456 kubelet[3183]: E1216 13:08:00.287373 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.287689 kubelet[3183]: E1216 13:08:00.287577 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.287782 kubelet[3183]: W1216 13:08:00.287731 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.287782 kubelet[3183]: E1216 13:08:00.287743 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.318501 containerd[1739]: time="2025-12-16T13:08:00.318191566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4266q,Uid:1bb934ad-05f4-4924-a57e-f48ae976bff0,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:00.380589 containerd[1739]: time="2025-12-16T13:08:00.380501358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d89ff549-p87fm,Uid:dadfd034-7fdd-497c-a17f-9033ed70ccd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e\"" Dec 16 13:08:00.384135 containerd[1739]: time="2025-12-16T13:08:00.384106214Z" level=info msg="connecting to shim d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814" address="unix:///run/containerd/s/8ced4f27058815396418a96a839234adf3818668ee0b7ac9a7d092e80be969b4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:00.385787 containerd[1739]: time="2025-12-16T13:08:00.384919817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:08:00.388188 kubelet[3183]: E1216 13:08:00.388166 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.388188 kubelet[3183]: W1216 13:08:00.388183 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.388332 kubelet[3183]: E1216 13:08:00.388199 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.388554 kubelet[3183]: E1216 13:08:00.388540 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.388554 kubelet[3183]: W1216 13:08:00.388553 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.388624 kubelet[3183]: E1216 13:08:00.388566 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.388877 kubelet[3183]: E1216 13:08:00.388863 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.388877 kubelet[3183]: W1216 13:08:00.388874 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.389046 kubelet[3183]: E1216 13:08:00.388886 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.389252 kubelet[3183]: E1216 13:08:00.389227 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.389252 kubelet[3183]: W1216 13:08:00.389237 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.389733 kubelet[3183]: E1216 13:08:00.389341 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.389733 kubelet[3183]: E1216 13:08:00.389676 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.389982 kubelet[3183]: W1216 13:08:00.389962 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.390056 kubelet[3183]: E1216 13:08:00.390041 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.390423 kubelet[3183]: E1216 13:08:00.390407 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.390423 kubelet[3183]: W1216 13:08:00.390419 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.390632 kubelet[3183]: E1216 13:08:00.390441 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.390632 kubelet[3183]: E1216 13:08:00.390543 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.390717 kubelet[3183]: W1216 13:08:00.390548 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.390717 kubelet[3183]: E1216 13:08:00.390693 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.391004 kubelet[3183]: E1216 13:08:00.390987 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391004 kubelet[3183]: W1216 13:08:00.390997 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.391115 kubelet[3183]: E1216 13:08:00.391093 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.391269 kubelet[3183]: E1216 13:08:00.391254 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391269 kubelet[3183]: W1216 13:08:00.391265 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.391375 kubelet[3183]: E1216 13:08:00.391336 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.391450 kubelet[3183]: E1216 13:08:00.391410 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391450 kubelet[3183]: W1216 13:08:00.391421 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.391513 kubelet[3183]: E1216 13:08:00.391466 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.391594 kubelet[3183]: E1216 13:08:00.391586 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391703 kubelet[3183]: W1216 13:08:00.391608 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.391703 kubelet[3183]: E1216 13:08:00.391618 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.391838 kubelet[3183]: E1216 13:08:00.391736 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391838 kubelet[3183]: W1216 13:08:00.391742 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.391897 kubelet[3183]: E1216 13:08:00.391883 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.391897 kubelet[3183]: W1216 13:08:00.391888 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.392081 kubelet[3183]: E1216 13:08:00.391961 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.392081 kubelet[3183]: E1216 13:08:00.392001 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.392081 kubelet[3183]: E1216 13:08:00.392002 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.392081 kubelet[3183]: W1216 13:08:00.392006 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.392081 kubelet[3183]: E1216 13:08:00.392012 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.392227 kubelet[3183]: E1216 13:08:00.392133 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.392227 kubelet[3183]: W1216 13:08:00.392140 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.392227 kubelet[3183]: E1216 13:08:00.392172 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.392410 kubelet[3183]: E1216 13:08:00.392345 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.392410 kubelet[3183]: W1216 13:08:00.392351 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.392410 kubelet[3183]: E1216 13:08:00.392363 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.392613 kubelet[3183]: E1216 13:08:00.392599 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.392613 kubelet[3183]: W1216 13:08:00.392610 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.392731 kubelet[3183]: E1216 13:08:00.392710 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.392962 kubelet[3183]: E1216 13:08:00.392940 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.392962 kubelet[3183]: W1216 13:08:00.392951 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.393325 kubelet[3183]: E1216 13:08:00.393271 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.393370 kubelet[3183]: E1216 13:08:00.393365 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.393403 kubelet[3183]: W1216 13:08:00.393397 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.393819 kubelet[3183]: E1216 13:08:00.393771 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.394004 kubelet[3183]: E1216 13:08:00.393904 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.394004 kubelet[3183]: W1216 13:08:00.393910 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.394004 kubelet[3183]: E1216 13:08:00.393982 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.394124 kubelet[3183]: E1216 13:08:00.394109 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.394124 kubelet[3183]: W1216 13:08:00.394116 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.394859 kubelet[3183]: E1216 13:08:00.394256 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.395223 kubelet[3183]: E1216 13:08:00.394980 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.395223 kubelet[3183]: W1216 13:08:00.394994 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.395223 kubelet[3183]: E1216 13:08:00.395010 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:00.395484 kubelet[3183]: E1216 13:08:00.395474 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.395555 kubelet[3183]: W1216 13:08:00.395546 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.395634 kubelet[3183]: E1216 13:08:00.395625 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.396134 kubelet[3183]: E1216 13:08:00.396086 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.396291 kubelet[3183]: W1216 13:08:00.396215 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.396291 kubelet[3183]: E1216 13:08:00.396237 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.396805 kubelet[3183]: E1216 13:08:00.396750 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.396805 kubelet[3183]: W1216 13:08:00.396767 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.396805 kubelet[3183]: E1216 13:08:00.396779 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.408782 kubelet[3183]: E1216 13:08:00.408770 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:00.408887 kubelet[3183]: W1216 13:08:00.408877 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:00.408944 kubelet[3183]: E1216 13:08:00.408935 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:00.422829 systemd[1]: Started cri-containerd-d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814.scope - libcontainer container d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814. 
Dec 16 13:08:00.456000 containerd[1739]: time="2025-12-16T13:08:00.455975361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4266q,Uid:1bb934ad-05f4-4924-a57e-f48ae976bff0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\"" Dec 16 13:08:01.980295 kubelet[3183]: E1216 13:08:01.980258 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:02.035082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914738656.mount: Deactivated successfully. Dec 16 13:08:02.529939 containerd[1739]: time="2025-12-16T13:08:02.529899956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:02.532707 containerd[1739]: time="2025-12-16T13:08:02.532680279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:08:02.536450 containerd[1739]: time="2025-12-16T13:08:02.536406033Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:02.542092 containerd[1739]: time="2025-12-16T13:08:02.542047552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:02.542520 containerd[1739]: time="2025-12-16T13:08:02.542383261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.157433668s" Dec 16 13:08:02.542520 containerd[1739]: time="2025-12-16T13:08:02.542411387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:08:02.543715 containerd[1739]: time="2025-12-16T13:08:02.543692343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:08:02.556687 containerd[1739]: time="2025-12-16T13:08:02.555584986Z" level=info msg="CreateContainer within sandbox \"948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:08:02.572806 containerd[1739]: time="2025-12-16T13:08:02.572777512Z" level=info msg="Container 0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:02.591620 containerd[1739]: time="2025-12-16T13:08:02.591594317Z" level=info msg="CreateContainer within sandbox \"948e8fbb9c7e9dccbaafbe847dbc5bfe0874647d7f760b397649503a2e44ab9e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42\"" Dec 16 13:08:02.592774 containerd[1739]: time="2025-12-16T13:08:02.592748694Z" level=info msg="StartContainer 
for \"0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42\"" Dec 16 13:08:02.594303 containerd[1739]: time="2025-12-16T13:08:02.594275952Z" level=info msg="connecting to shim 0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42" address="unix:///run/containerd/s/6d178d1e4dd0479aeae5b1020fb4759eb6203a2b893204475fa224810c012c06" protocol=ttrpc version=3 Dec 16 13:08:02.613805 systemd[1]: Started cri-containerd-0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42.scope - libcontainer container 0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42. Dec 16 13:08:02.659573 containerd[1739]: time="2025-12-16T13:08:02.659479436Z" level=info msg="StartContainer for \"0757e91b4fe5cbb7248621bac7cae5975396800ec0f169cb4e1ac007a4029e42\" returns successfully" Dec 16 13:08:03.090151 kubelet[3183]: E1216 13:08:03.090122 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.090151 kubelet[3183]: W1216 13:08:03.090146 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090165 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090414 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.091916 kubelet[3183]: W1216 13:08:03.090421 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090432 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090532 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.091916 kubelet[3183]: W1216 13:08:03.090537 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090544 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090705 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.091916 kubelet[3183]: W1216 13:08:03.090711 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.091916 kubelet[3183]: E1216 13:08:03.090719 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.090829 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092193 kubelet[3183]: W1216 13:08:03.090834 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.090841 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.090930 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092193 kubelet[3183]: W1216 13:08:03.090935 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.090941 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.091023 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092193 kubelet[3183]: W1216 13:08:03.091028 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.091034 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092193 kubelet[3183]: E1216 13:08:03.091119 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092485 kubelet[3183]: W1216 13:08:03.091124 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091130 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091218 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092485 kubelet[3183]: W1216 13:08:03.091222 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091228 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091307 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092485 kubelet[3183]: W1216 13:08:03.091311 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091317 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092485 kubelet[3183]: E1216 13:08:03.091426 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092485 kubelet[3183]: W1216 13:08:03.091432 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.091438 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.091902 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092795 kubelet[3183]: W1216 13:08:03.091913 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.091925 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.092312 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092795 kubelet[3183]: W1216 13:08:03.092320 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.092331 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.092571 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.092795 kubelet[3183]: W1216 13:08:03.092577 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.092795 kubelet[3183]: E1216 13:08:03.092585 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.093105 kubelet[3183]: E1216 13:08:03.092871 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.093105 kubelet[3183]: W1216 13:08:03.092877 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.093105 kubelet[3183]: E1216 13:08:03.092885 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.107546 kubelet[3183]: E1216 13:08:03.107521 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.107546 kubelet[3183]: W1216 13:08:03.107540 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.107721 kubelet[3183]: E1216 13:08:03.107553 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.107748 kubelet[3183]: E1216 13:08:03.107723 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.107748 kubelet[3183]: W1216 13:08:03.107739 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.107797 kubelet[3183]: E1216 13:08:03.107755 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.107923 kubelet[3183]: E1216 13:08:03.107912 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.107923 kubelet[3183]: W1216 13:08:03.107920 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.107971 kubelet[3183]: E1216 13:08:03.107935 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.108284 kubelet[3183]: E1216 13:08:03.108128 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.108284 kubelet[3183]: W1216 13:08:03.108136 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.108284 kubelet[3183]: E1216 13:08:03.108204 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.108451 kubelet[3183]: E1216 13:08:03.108431 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.108848 kubelet[3183]: W1216 13:08:03.108586 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.108848 kubelet[3183]: E1216 13:08:03.108600 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.108848 kubelet[3183]: E1216 13:08:03.108723 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.108848 kubelet[3183]: W1216 13:08:03.108728 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.108848 kubelet[3183]: E1216 13:08:03.108735 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.109088 kubelet[3183]: E1216 13:08:03.109073 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.109088 kubelet[3183]: W1216 13:08:03.109084 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.109154 kubelet[3183]: E1216 13:08:03.109094 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.109594 kubelet[3183]: E1216 13:08:03.109542 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.109594 kubelet[3183]: W1216 13:08:03.109579 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.109716 kubelet[3183]: E1216 13:08:03.109650 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.110029 kubelet[3183]: E1216 13:08:03.110017 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110029 kubelet[3183]: W1216 13:08:03.110028 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110121 kubelet[3183]: E1216 13:08:03.110116 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.110161 kubelet[3183]: E1216 13:08:03.110156 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110188 kubelet[3183]: W1216 13:08:03.110161 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110188 kubelet[3183]: E1216 13:08:03.110175 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.110328 kubelet[3183]: E1216 13:08:03.110303 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110328 kubelet[3183]: W1216 13:08:03.110324 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110384 kubelet[3183]: E1216 13:08:03.110339 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.110488 kubelet[3183]: E1216 13:08:03.110460 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110488 kubelet[3183]: W1216 13:08:03.110484 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110564 kubelet[3183]: E1216 13:08:03.110495 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.110650 kubelet[3183]: E1216 13:08:03.110634 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110650 kubelet[3183]: W1216 13:08:03.110648 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110727 kubelet[3183]: E1216 13:08:03.110657 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.110920 kubelet[3183]: E1216 13:08:03.110894 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.110920 kubelet[3183]: W1216 13:08:03.110917 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.110990 kubelet[3183]: E1216 13:08:03.110928 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:03.111034 kubelet[3183]: E1216 13:08:03.111023 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.111034 kubelet[3183]: W1216 13:08:03.111032 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.111082 kubelet[3183]: E1216 13:08:03.111038 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.111205 kubelet[3183]: E1216 13:08:03.111188 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.111205 kubelet[3183]: W1216 13:08:03.111203 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.111257 kubelet[3183]: E1216 13:08:03.111216 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.111451 kubelet[3183]: E1216 13:08:03.111440 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.111451 kubelet[3183]: W1216 13:08:03.111447 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.111507 kubelet[3183]: E1216 13:08:03.111456 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:03.111588 kubelet[3183]: E1216 13:08:03.111577 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:03.111588 kubelet[3183]: W1216 13:08:03.111586 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:03.111637 kubelet[3183]: E1216 13:08:03.111592 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 16 13:08:03.980721 kubelet[3183]: E1216 13:08:03.980422 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:08:04.055307 containerd[1739]: time="2025-12-16T13:08:04.055267313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:04.062700 kubelet[3183]: I1216 13:08:04.062675 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:08:04.069009 containerd[1739]: time="2025-12-16T13:08:04.068883942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 16 13:08:04.076200 containerd[1739]: time="2025-12-16T13:08:04.076175631Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:04.084676 containerd[1739]: time="2025-12-16T13:08:04.084632159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:04.085086 containerd[1739]: time="2025-12-16T13:08:04.085044481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.541228906s"
Dec 16 13:08:04.085121 containerd[1739]: time="2025-12-16T13:08:04.085088405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 16 13:08:04.086771 containerd[1739]: time="2025-12-16T13:08:04.086686803Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 13:08:04.100190 kubelet[3183]: E1216 13:08:04.100168 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:08:04.100190 kubelet[3183]: W1216 13:08:04.100187 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:08:04.100585 kubelet[3183]: E1216 13:08:04.100207 3183 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:08:04.106687 containerd[1739]: time="2025-12-16T13:08:04.106327251Z" level=info msg="Container 0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:04.123124 containerd[1739]: time="2025-12-16T13:08:04.123100900Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14\""
Dec 16 13:08:04.123687 containerd[1739]: time="2025-12-16T13:08:04.123535398Z" level=info msg="StartContainer for \"0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14\""
Dec 16 13:08:04.125142 containerd[1739]: time="2025-12-16T13:08:04.125117530Z" level=info msg="connecting to shim 0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14" address="unix:///run/containerd/s/8ced4f27058815396418a96a839234adf3818668ee0b7ac9a7d092e80be969b4" protocol=ttrpc version=3
Dec 16 13:08:04.147953 systemd[1]: Started cri-containerd-0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14.scope - libcontainer container 0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14.
Dec 16 13:08:04.223591 containerd[1739]: time="2025-12-16T13:08:04.223517793Z" level=info msg="StartContainer for \"0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14\" returns successfully"
Dec 16 13:08:04.223776 systemd[1]: cri-containerd-0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14.scope: Deactivated successfully.
Dec 16 13:08:04.228383 containerd[1739]: time="2025-12-16T13:08:04.228353682Z" level=info msg="received container exit event container_id:\"0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14\" id:\"0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14\" pid:3886 exited_at:{seconds:1765890484 nanos:227988893}"
Dec 16 13:08:04.246354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0162c740f581a3e732e0301ae494b1fbcfb49e75d1fea1ae33131fae98ce7c14-rootfs.mount: Deactivated successfully.
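The "connecting to shim ... protocol=ttrpc version=3" entries show how containerd reaches the per-container shim once the container's systemd scope is started: a unix socket under /run/containerd/s/, speaking ttrpc (containerd's minimal gRPC variant). Purely as an illustration, and assuming the github.com/containerd/ttrpc package, a client for such a socket would be opened as below; the socket path is copied from the log, and in practice only containerd itself owns this connection:

    // ttrpc-dial.go: hypothetical sketch of opening a ttrpc client on a
    // containerd shim socket.
    package main

    import (
        "log"
        "net"

        "github.com/containerd/ttrpc"
    )

    func main() {
        // Socket path copied from the "connecting to shim" entry above.
        conn, err := net.Dial("unix", "/run/containerd/s/8ced4f27058815396418a96a839234adf3818668ee0b7ac9a7d092e80be969b4")
        if err != nil {
            log.Fatalf("dial shim socket: %v", err)
        }
        client := ttrpc.NewClient(conn)
        defer client.Close()
        // A real caller would now issue task-service RPCs (create, start,
        // wait) over this client; the shim serves them for its container.
        log.Println("ttrpc client established")
    }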
Dec 16 13:08:05.110998 kubelet[3183]: I1216 13:08:05.110944 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69d89ff549-p87fm" podStartSLOduration=3.951911831 podStartE2EDuration="6.110927214s" podCreationTimestamp="2025-12-16 13:07:59 +0000 UTC" firstStartedPulling="2025-12-16 13:08:00.384168158 +0000 UTC m=+17.494848632" lastFinishedPulling="2025-12-16 13:08:02.543183546 +0000 UTC m=+19.653864015" observedRunningTime="2025-12-16 13:08:03.07212123 +0000 UTC m=+20.182801706" watchObservedRunningTime="2025-12-16 13:08:05.110927214 +0000 UTC m=+22.221607687"
Dec 16 13:08:05.980154 kubelet[3183]: E1216 13:08:05.980097 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:08:07.070932 containerd[1739]: time="2025-12-16T13:08:07.070852199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 13:08:07.960479 kubelet[3183]: I1216 13:08:07.960367 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:08:07.979819 kubelet[3183]: E1216 13:08:07.979789 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:08:09.805026 containerd[1739]: time="2025-12-16T13:08:09.804986455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:09.808031 containerd[1739]: time="2025-12-16T13:08:09.807994649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 16 13:08:09.810972 containerd[1739]: time="2025-12-16T13:08:09.810931672Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:09.815789 containerd[1739]: time="2025-12-16T13:08:09.815758263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:09.816264 containerd[1739]: time="2025-12-16T13:08:09.816241831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.745354061s"
Dec 16 13:08:09.816311 containerd[1739]: time="2025-12-16T13:08:09.816263254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 16 13:08:09.817988 containerd[1739]: time="2025-12-16T13:08:09.817956107Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 16 13:08:09.838213 containerd[1739]: time="2025-12-16T13:08:09.837327598Z" level=info msg="Container f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:09.854183 containerd[1739]: time="2025-12-16T13:08:09.854157354Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f\""
Dec 16 13:08:09.854591 containerd[1739]: time="2025-12-16T13:08:09.854525795Z" level=info msg="StartContainer for \"f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f\""
Dec 16 13:08:09.856198 containerd[1739]: time="2025-12-16T13:08:09.856163120Z" level=info msg="connecting to shim f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f" address="unix:///run/containerd/s/8ced4f27058815396418a96a839234adf3818668ee0b7ac9a7d092e80be969b4" protocol=ttrpc version=3
Dec 16 13:08:09.878809 systemd[1]: Started cri-containerd-f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f.scope - libcontainer container f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f.
Dec 16 13:08:09.950775 containerd[1739]: time="2025-12-16T13:08:09.950731760Z" level=info msg="StartContainer for \"f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f\" returns successfully"
Dec 16 13:08:09.980214 kubelet[3183]: E1216 13:08:09.980181 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:08:11.241355 systemd[1]: cri-containerd-f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f.scope: Deactivated successfully.
Dec 16 13:08:11.242327 systemd[1]: cri-containerd-f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f.scope: Consumed 404ms CPU time, 191.9M memory peak, 171.3M written to disk.
Dec 16 13:08:11.242928 containerd[1739]: time="2025-12-16T13:08:11.242873968Z" level=info msg="received container exit event container_id:\"f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f\" id:\"f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f\" pid:3944 exited_at:{seconds:1765890491 nanos:241873269}"
Dec 16 13:08:11.263287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f56bf5c507dd8d7cfa079edb4cf4cb30e8a4720b55eb2037e3f8b530e168718f-rootfs.mount: Deactivated successfully.
Dec 16 13:08:11.296232 kubelet[3183]: I1216 13:08:11.296212 3183 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 13:08:11.337800 systemd[1]: Created slice kubepods-burstable-pod99d97f9f_faa7_4627_a5cf_6dfa6f8affe5.slice - libcontainer container kubepods-burstable-pod99d97f9f_faa7_4627_a5cf_6dfa6f8affe5.slice.
Dec 16 13:08:11.357132 systemd[1]: Created slice kubepods-besteffort-pod543c36ea_093c_4498_a84b_c504d49ef8b8.slice - libcontainer container kubepods-besteffort-pod543c36ea_093c_4498_a84b_c504d49ef8b8.slice.
Dec 16 13:08:11.366232 systemd[1]: Created slice kubepods-besteffort-pod732c00ab_68ae_445e_a71a_f5b84da1878e.slice - libcontainer container kubepods-besteffort-pod732c00ab_68ae_445e_a71a_f5b84da1878e.slice.
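The pod_startup_latency_tracker entry above carries enough data to check itself: in this entry podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), so pull time does not count against the startup SLO. Worked through with the monotonic offsets from the entry:

    pull window  = 19.653864015s - 17.494848632s           = 2.159015383s
    E2E duration = 13:08:05.110927214 - 13:07:59 +0000 UTC = 6.110927214s
    SLO duration = 6.110927214s - 2.159015383s             = 3.951911831s

Both derived figures match the logged podStartE2EDuration and podStartSLOduration exactly.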
Dec 16 13:08:11.368678 kubelet[3183]: I1216 13:08:11.367949 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b113be-574b-47a2-86df-86aede15472d-goldmane-ca-bundle\") pod \"goldmane-666569f655-7dvks\" (UID: \"48b113be-574b-47a2-86df-86aede15472d\") " pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:11.368678 kubelet[3183]: I1216 13:08:11.367986 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99d97f9f-faa7-4627-a5cf-6dfa6f8affe5-config-volume\") pod \"coredns-668d6bf9bc-wkz6f\" (UID: \"99d97f9f-faa7-4627-a5cf-6dfa6f8affe5\") " pod="kube-system/coredns-668d6bf9bc-wkz6f" Dec 16 13:08:11.368678 kubelet[3183]: I1216 13:08:11.368008 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqtf\" (UniqueName: \"kubernetes.io/projected/48b113be-574b-47a2-86df-86aede15472d-kube-api-access-pdqtf\") pod \"goldmane-666569f655-7dvks\" (UID: \"48b113be-574b-47a2-86df-86aede15472d\") " pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:11.368678 kubelet[3183]: I1216 13:08:11.368027 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26wt\" (UniqueName: \"kubernetes.io/projected/732c00ab-68ae-445e-a71a-f5b84da1878e-kube-api-access-m26wt\") pod \"calico-apiserver-5bbcddbc87-qq9w6\" (UID: \"732c00ab-68ae-445e-a71a-f5b84da1878e\") " pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" Dec 16 13:08:11.368678 kubelet[3183]: I1216 13:08:11.368136 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29fc32db-4a73-46de-9d39-e11c06875a97-tigera-ca-bundle\") pod \"calico-kube-controllers-76554f6877-gxtwh\" (UID: \"29fc32db-4a73-46de-9d39-e11c06875a97\") " pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" Dec 16 13:08:11.368869 kubelet[3183]: I1216 13:08:11.368222 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npl9r\" (UniqueName: \"kubernetes.io/projected/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-kube-api-access-npl9r\") pod \"whisker-5998d8cc9c-rqgsc\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") " pod="calico-system/whisker-5998d8cc9c-rqgsc" Dec 16 13:08:11.368869 kubelet[3183]: I1216 13:08:11.368244 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1270693e-5f97-4929-9900-78e53066ce6a-config-volume\") pod \"coredns-668d6bf9bc-rgmjl\" (UID: \"1270693e-5f97-4929-9900-78e53066ce6a\") " pod="kube-system/coredns-668d6bf9bc-rgmjl" Dec 16 13:08:11.368869 kubelet[3183]: I1216 13:08:11.368262 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2fj\" (UniqueName: \"kubernetes.io/projected/1270693e-5f97-4929-9900-78e53066ce6a-kube-api-access-tn2fj\") pod \"coredns-668d6bf9bc-rgmjl\" (UID: \"1270693e-5f97-4929-9900-78e53066ce6a\") " pod="kube-system/coredns-668d6bf9bc-rgmjl" Dec 16 13:08:11.368869 kubelet[3183]: I1216 13:08:11.368280 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/48b113be-574b-47a2-86df-86aede15472d-config\") pod \"goldmane-666569f655-7dvks\" (UID: \"48b113be-574b-47a2-86df-86aede15472d\") " pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:11.368869 kubelet[3183]: I1216 13:08:11.368302 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2zz\" (UniqueName: \"kubernetes.io/projected/99d97f9f-faa7-4627-a5cf-6dfa6f8affe5-kube-api-access-gw2zz\") pod \"coredns-668d6bf9bc-wkz6f\" (UID: \"99d97f9f-faa7-4627-a5cf-6dfa6f8affe5\") " pod="kube-system/coredns-668d6bf9bc-wkz6f" Dec 16 13:08:11.369001 kubelet[3183]: I1216 13:08:11.368323 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/732c00ab-68ae-445e-a71a-f5b84da1878e-calico-apiserver-certs\") pod \"calico-apiserver-5bbcddbc87-qq9w6\" (UID: \"732c00ab-68ae-445e-a71a-f5b84da1878e\") " pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" Dec 16 13:08:11.369001 kubelet[3183]: I1216 13:08:11.368340 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-ca-bundle\") pod \"whisker-5998d8cc9c-rqgsc\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") " pod="calico-system/whisker-5998d8cc9c-rqgsc" Dec 16 13:08:11.369001 kubelet[3183]: I1216 13:08:11.368358 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/48b113be-574b-47a2-86df-86aede15472d-goldmane-key-pair\") pod \"goldmane-666569f655-7dvks\" (UID: \"48b113be-574b-47a2-86df-86aede15472d\") " pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:11.369001 kubelet[3183]: I1216 13:08:11.368380 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkkz\" (UniqueName: \"kubernetes.io/projected/543c36ea-093c-4498-a84b-c504d49ef8b8-kube-api-access-rpkkz\") pod \"calico-apiserver-5bbcddbc87-8ttxw\" (UID: \"543c36ea-093c-4498-a84b-c504d49ef8b8\") " pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" Dec 16 13:08:11.369001 kubelet[3183]: I1216 13:08:11.368400 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-backend-key-pair\") pod \"whisker-5998d8cc9c-rqgsc\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") " pod="calico-system/whisker-5998d8cc9c-rqgsc" Dec 16 13:08:11.369127 kubelet[3183]: I1216 13:08:11.368422 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qnpl\" (UniqueName: \"kubernetes.io/projected/29fc32db-4a73-46de-9d39-e11c06875a97-kube-api-access-6qnpl\") pod \"calico-kube-controllers-76554f6877-gxtwh\" (UID: \"29fc32db-4a73-46de-9d39-e11c06875a97\") " pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" Dec 16 13:08:11.369127 kubelet[3183]: I1216 13:08:11.368442 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/543c36ea-093c-4498-a84b-c504d49ef8b8-calico-apiserver-certs\") pod \"calico-apiserver-5bbcddbc87-8ttxw\" (UID: \"543c36ea-093c-4498-a84b-c504d49ef8b8\") " 
pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" Dec 16 13:08:11.374186 systemd[1]: Created slice kubepods-besteffort-poda1e5d092_99f2_468b_8feb_f5f41df7cbc8.slice - libcontainer container kubepods-besteffort-poda1e5d092_99f2_468b_8feb_f5f41df7cbc8.slice. Dec 16 13:08:11.380017 systemd[1]: Created slice kubepods-besteffort-pod48b113be_574b_47a2_86df_86aede15472d.slice - libcontainer container kubepods-besteffort-pod48b113be_574b_47a2_86df_86aede15472d.slice. Dec 16 13:08:11.387582 systemd[1]: Created slice kubepods-burstable-pod1270693e_5f97_4929_9900_78e53066ce6a.slice - libcontainer container kubepods-burstable-pod1270693e_5f97_4929_9900_78e53066ce6a.slice. Dec 16 13:08:11.396325 systemd[1]: Created slice kubepods-besteffort-pod29fc32db_4a73_46de_9d39_e11c06875a97.slice - libcontainer container kubepods-besteffort-pod29fc32db_4a73_46de_9d39_e11c06875a97.slice. Dec 16 13:08:11.688602 containerd[1739]: time="2025-12-16T13:08:11.687289952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkz6f,Uid:99d97f9f-faa7-4627-a5cf-6dfa6f8affe5,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:11.690121 containerd[1739]: time="2025-12-16T13:08:11.689223883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-qq9w6,Uid:732c00ab-68ae-445e-a71a-f5b84da1878e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:11.690671 containerd[1739]: time="2025-12-16T13:08:11.690564258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7dvks,Uid:48b113be-574b-47a2-86df-86aede15472d,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:11.691811 containerd[1739]: time="2025-12-16T13:08:11.687335227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5998d8cc9c-rqgsc,Uid:a1e5d092-99f2-468b-8feb-f5f41df7cbc8,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:11.692041 containerd[1739]: time="2025-12-16T13:08:11.691807173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-8ttxw,Uid:543c36ea-093c-4498-a84b-c504d49ef8b8,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:11.696014 containerd[1739]: time="2025-12-16T13:08:11.695989930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rgmjl,Uid:1270693e-5f97-4929-9900-78e53066ce6a,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:11.699666 containerd[1739]: time="2025-12-16T13:08:11.699643972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76554f6877-gxtwh,Uid:29fc32db-4a73-46de-9d39-e11c06875a97,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:11.984823 systemd[1]: Created slice kubepods-besteffort-podb24eac48_5262_426b_9b2f_c5c56fc3732b.slice - libcontainer container kubepods-besteffort-podb24eac48_5262_426b_9b2f_c5c56fc3732b.slice. 
Dec 16 13:08:11.990986 containerd[1739]: time="2025-12-16T13:08:11.990941565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bv5h8,Uid:b24eac48-5262-426b-9b2f-c5c56fc3732b,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:12.382740 containerd[1739]: time="2025-12-16T13:08:12.380626403Z" level=error msg="Failed to destroy network for sandbox \"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.383966 systemd[1]: run-netns-cni\x2df95b7d5b\x2d5e21\x2d6d93\x2dea0e\x2dab9649239b08.mount: Deactivated successfully. Dec 16 13:08:12.387159 containerd[1739]: time="2025-12-16T13:08:12.387110994Z" level=error msg="Failed to destroy network for sandbox \"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.390484 systemd[1]: run-netns-cni\x2d02ec5640\x2dba14\x2dfc88\x2d131f\x2dfb6613f21fdc.mount: Deactivated successfully. Dec 16 13:08:12.391270 containerd[1739]: time="2025-12-16T13:08:12.390490189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkz6f,Uid:99d97f9f-faa7-4627-a5cf-6dfa6f8affe5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.391994 kubelet[3183]: E1216 13:08:12.391656 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.391994 kubelet[3183]: E1216 13:08:12.391812 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wkz6f" Dec 16 13:08:12.391994 kubelet[3183]: E1216 13:08:12.391854 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wkz6f" Dec 16 13:08:12.392972 kubelet[3183]: E1216 13:08:12.391928 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wkz6f_kube-system(99d97f9f-faa7-4627-a5cf-6dfa6f8affe5)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wkz6f_kube-system(99d97f9f-faa7-4627-a5cf-6dfa6f8affe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3e8a1666e156a8eba6da15d3b886a238799c21f4f411a2ae89a3ba5e35b6218\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wkz6f" podUID="99d97f9f-faa7-4627-a5cf-6dfa6f8affe5" Dec 16 13:08:12.396231 containerd[1739]: time="2025-12-16T13:08:12.396077314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rgmjl,Uid:1270693e-5f97-4929-9900-78e53066ce6a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.396773 kubelet[3183]: E1216 13:08:12.396707 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.396951 kubelet[3183]: E1216 13:08:12.396756 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rgmjl" Dec 16 13:08:12.396951 kubelet[3183]: E1216 13:08:12.396867 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rgmjl" Dec 16 13:08:12.396951 kubelet[3183]: E1216 13:08:12.396913 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rgmjl_kube-system(1270693e-5f97-4929-9900-78e53066ce6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rgmjl_kube-system(1270693e-5f97-4929-9900-78e53066ce6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e55d4033c8bdcbb4839ac6e4632fc5b1e2afe2be17ba885f152eb068a375dda3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rgmjl" podUID="1270693e-5f97-4929-9900-78e53066ce6a" Dec 16 13:08:12.430374 containerd[1739]: time="2025-12-16T13:08:12.430338368Z" level=error msg="Failed to destroy network for sandbox \"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.434122 systemd[1]: run-netns-cni\x2d80fb7f0a\x2dc196\x2d2ef3\x2d9c2a\x2dc7e077ea7674.mount: Deactivated successfully. Dec 16 13:08:12.438522 containerd[1739]: time="2025-12-16T13:08:12.438475724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5998d8cc9c-rqgsc,Uid:a1e5d092-99f2-468b-8feb-f5f41df7cbc8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.439958 kubelet[3183]: E1216 13:08:12.439078 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.439958 kubelet[3183]: E1216 13:08:12.439149 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5998d8cc9c-rqgsc" Dec 16 13:08:12.439958 kubelet[3183]: E1216 13:08:12.439172 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5998d8cc9c-rqgsc" Dec 16 13:08:12.440257 kubelet[3183]: E1216 13:08:12.439223 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5998d8cc9c-rqgsc_calico-system(a1e5d092-99f2-468b-8feb-f5f41df7cbc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5998d8cc9c-rqgsc_calico-system(a1e5d092-99f2-468b-8feb-f5f41df7cbc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9f69457ec20fbba6abe3dc19a12975936b94fdc5a13b3be17f404461fb2ea55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5998d8cc9c-rqgsc" podUID="a1e5d092-99f2-468b-8feb-f5f41df7cbc8" Dec 16 13:08:12.471760 containerd[1739]: time="2025-12-16T13:08:12.471721362Z" level=error msg="Failed to destroy network for sandbox \"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.473657 systemd[1]: 
run-netns-cni\x2d4a925177\x2dddd3\x2ddcc6\x2d9d55\x2da5750bfe380a.mount: Deactivated successfully. Dec 16 13:08:12.477755 containerd[1739]: time="2025-12-16T13:08:12.477727409Z" level=error msg="Failed to destroy network for sandbox \"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.480683 containerd[1739]: time="2025-12-16T13:08:12.480349807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bv5h8,Uid:b24eac48-5262-426b-9b2f-c5c56fc3732b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.481129 kubelet[3183]: E1216 13:08:12.481097 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.481242 kubelet[3183]: E1216 13:08:12.481225 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:12.481307 kubelet[3183]: E1216 13:08:12.481295 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bv5h8" Dec 16 13:08:12.481415 kubelet[3183]: E1216 13:08:12.481397 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"855cdd693fc64671840260b886b30d6789e9d924477a2969ccdc34afab2db651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:12.484641 containerd[1739]: time="2025-12-16T13:08:12.484600857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76554f6877-gxtwh,Uid:29fc32db-4a73-46de-9d39-e11c06875a97,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.485085 kubelet[3183]: E1216 13:08:12.484976 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.485085 kubelet[3183]: E1216 13:08:12.485041 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" Dec 16 13:08:12.485244 kubelet[3183]: E1216 13:08:12.485061 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" Dec 16 13:08:12.485439 kubelet[3183]: E1216 13:08:12.485407 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1238d3e345f8d25c1074fd527614563a1f403770ad6a9367c259ca79e41727ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:08:12.486868 containerd[1739]: time="2025-12-16T13:08:12.486813176Z" level=error msg="Failed to destroy network for sandbox \"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.490469 containerd[1739]: time="2025-12-16T13:08:12.490383901Z" level=error msg="Failed to destroy network for sandbox \"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.492175 containerd[1739]: time="2025-12-16T13:08:12.492147847Z" level=error msg="Failed to destroy network for sandbox 
\"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.492922 containerd[1739]: time="2025-12-16T13:08:12.492899657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7dvks,Uid:48b113be-574b-47a2-86df-86aede15472d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.493237 kubelet[3183]: E1216 13:08:12.493112 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.493237 kubelet[3183]: E1216 13:08:12.493150 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:12.493237 kubelet[3183]: E1216 13:08:12.493175 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7dvks" Dec 16 13:08:12.493328 kubelet[3183]: E1216 13:08:12.493208 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b5a84b64cdb4121a407236dcc7d598f40fb3d797f370baa7e5f329763f7326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:08:12.496490 containerd[1739]: time="2025-12-16T13:08:12.496427358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-8ttxw,Uid:543c36ea-093c-4498-a84b-c504d49ef8b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.496646 kubelet[3183]: E1216 13:08:12.496623 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.496709 kubelet[3183]: E1216 13:08:12.496658 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" Dec 16 13:08:12.496709 kubelet[3183]: E1216 13:08:12.496701 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" Dec 16 13:08:12.496778 kubelet[3183]: E1216 13:08:12.496741 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"255b4478efeb66d757600ac5915ebd9e18c943e4b6a4a1c840be9e82613d67f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:12.500720 containerd[1739]: time="2025-12-16T13:08:12.500686232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-qq9w6,Uid:732c00ab-68ae-445e-a71a-f5b84da1878e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.500845 kubelet[3183]: E1216 13:08:12.500827 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:12.500845 kubelet[3183]: E1216 13:08:12.500861 3183 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" Dec 16 13:08:12.500949 kubelet[3183]: E1216 13:08:12.500879 3183 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" Dec 16 13:08:12.500949 kubelet[3183]: E1216 13:08:12.500915 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2564190bf4c56c98e6d858614e0c67e05c2f180ece210ba5d4ed31f4048a074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:08:13.092836 containerd[1739]: time="2025-12-16T13:08:13.092785803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:08:13.263597 systemd[1]: run-netns-cni\x2dfeaaa3d3\x2d5af1\x2d857a\x2da9c8\x2d2ed72b755907.mount: Deactivated successfully. Dec 16 13:08:13.263702 systemd[1]: run-netns-cni\x2de0496f1f\x2daf7e\x2d0aa0\x2d8949\x2d1d214bd1c82c.mount: Deactivated successfully. Dec 16 13:08:13.263748 systemd[1]: run-netns-cni\x2d08896172\x2d5f47\x2d954d\x2d27a5\x2d0f2b0a7a51b3.mount: Deactivated successfully. Dec 16 13:08:13.263788 systemd[1]: run-netns-cni\x2dfcd00978\x2d57f8\x2deae1\x2d838d\x2d638fafa81ac7.mount: Deactivated successfully. Dec 16 13:08:17.410477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246293954.mount: Deactivated successfully. 
Dec 16 13:08:17.438169 containerd[1739]: time="2025-12-16T13:08:17.438127574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:17.440700 containerd[1739]: time="2025-12-16T13:08:17.440675213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Dec 16 13:08:17.444063 containerd[1739]: time="2025-12-16T13:08:17.444017685Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:17.448026 containerd[1739]: time="2025-12-16T13:08:17.447985000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:08:17.448429 containerd[1739]: time="2025-12-16T13:08:17.448272733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.355227879s"
Dec 16 13:08:17.448429 containerd[1739]: time="2025-12-16T13:08:17.448302177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Dec 16 13:08:17.459647 containerd[1739]: time="2025-12-16T13:08:17.459588688Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 16 13:08:17.486728 containerd[1739]: time="2025-12-16T13:08:17.485878899Z" level=info msg="Container 54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:08:17.504714 containerd[1739]: time="2025-12-16T13:08:17.504689177Z" level=info msg="CreateContainer within sandbox \"d5bf419150868c6f3ed17d18234bce0feca63ca9f8bdf27158d5c265d12ff814\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4\""
Dec 16 13:08:17.505694 containerd[1739]: time="2025-12-16T13:08:17.505296037Z" level=info msg="StartContainer for \"54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4\""
Dec 16 13:08:17.506760 containerd[1739]: time="2025-12-16T13:08:17.506736117Z" level=info msg="connecting to shim 54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4" address="unix:///run/containerd/s/8ced4f27058815396418a96a839234adf3818668ee0b7ac9a7d092e80be969b4" protocol=ttrpc version=3
Dec 16 13:08:17.529798 systemd[1]: Started cri-containerd-54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4.scope - libcontainer container 54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4.
Dec 16 13:08:17.599555 containerd[1739]: time="2025-12-16T13:08:17.599461136Z" level=info msg="StartContainer for \"54210470b28a49593da24eb1113d02fe762d240ec5a0a5b7d7c50f99e5dd9af4\" returns successfully"
Dec 16 13:08:17.996539 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 16 13:08:17.996651 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 16 13:08:18.216479 kubelet[3183]: I1216 13:08:18.215706 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-ca-bundle\") pod \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") "
Dec 16 13:08:18.216479 kubelet[3183]: I1216 13:08:18.216023 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a1e5d092-99f2-468b-8feb-f5f41df7cbc8" (UID: "a1e5d092-99f2-468b-8feb-f5f41df7cbc8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:08:18.216479 kubelet[3183]: I1216 13:08:18.216074 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-backend-key-pair\") pod \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") "
Dec 16 13:08:18.216479 kubelet[3183]: I1216 13:08:18.216350 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npl9r\" (UniqueName: \"kubernetes.io/projected/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-kube-api-access-npl9r\") pod \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\" (UID: \"a1e5d092-99f2-468b-8feb-f5f41df7cbc8\") "
Dec 16 13:08:18.216916 kubelet[3183]: I1216 13:08:18.216517 3183 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-ca-bundle\") on node \"ci-4459.2.2-a-ace8908665\" DevicePath \"\""
Dec 16 13:08:18.230450 kubelet[3183]: I1216 13:08:18.230408 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a1e5d092-99f2-468b-8feb-f5f41df7cbc8" (UID: "a1e5d092-99f2-468b-8feb-f5f41df7cbc8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:08:18.232468 kubelet[3183]: I1216 13:08:18.230599 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-kube-api-access-npl9r" (OuterVolumeSpecName: "kube-api-access-npl9r") pod "a1e5d092-99f2-468b-8feb-f5f41df7cbc8" (UID: "a1e5d092-99f2-468b-8feb-f5f41df7cbc8"). InnerVolumeSpecName "kube-api-access-npl9r".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:18.317585 kubelet[3183]: I1216 13:08:18.317477 3183 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-npl9r\" (UniqueName: \"kubernetes.io/projected/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-kube-api-access-npl9r\") on node \"ci-4459.2.2-a-ace8908665\" DevicePath \"\"" Dec 16 13:08:18.317585 kubelet[3183]: I1216 13:08:18.317506 3183 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1e5d092-99f2-468b-8feb-f5f41df7cbc8-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-ace8908665\" DevicePath \"\"" Dec 16 13:08:18.409713 systemd[1]: var-lib-kubelet-pods-a1e5d092\x2d99f2\x2d468b\x2d8feb\x2df5f41df7cbc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnpl9r.mount: Deactivated successfully. Dec 16 13:08:18.409829 systemd[1]: var-lib-kubelet-pods-a1e5d092\x2d99f2\x2d468b\x2d8feb\x2df5f41df7cbc8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:08:18.984236 systemd[1]: Removed slice kubepods-besteffort-poda1e5d092_99f2_468b_8feb_f5f41df7cbc8.slice - libcontainer container kubepods-besteffort-poda1e5d092_99f2_468b_8feb_f5f41df7cbc8.slice. Dec 16 13:08:19.120134 kubelet[3183]: I1216 13:08:19.120077 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4266q" podStartSLOduration=3.127946214 podStartE2EDuration="20.120049854s" podCreationTimestamp="2025-12-16 13:07:59 +0000 UTC" firstStartedPulling="2025-12-16 13:08:00.456750467 +0000 UTC m=+17.567430934" lastFinishedPulling="2025-12-16 13:08:17.4488541 +0000 UTC m=+34.559534574" observedRunningTime="2025-12-16 13:08:18.160506641 +0000 UTC m=+35.271187139" watchObservedRunningTime="2025-12-16 13:08:19.120049854 +0000 UTC m=+36.230730328" Dec 16 13:08:19.168866 systemd[1]: Created slice kubepods-besteffort-podd909e58a_0385_4774_8fd8_0e43ade4f95f.slice - libcontainer container kubepods-besteffort-podd909e58a_0385_4774_8fd8_0e43ade4f95f.slice. 
Dec 16 13:08:19.222048 kubelet[3183]: I1216 13:08:19.221962 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d909e58a-0385-4774-8fd8-0e43ade4f95f-whisker-ca-bundle\") pod \"whisker-555f4cfd69-tjs7n\" (UID: \"d909e58a-0385-4774-8fd8-0e43ade4f95f\") " pod="calico-system/whisker-555f4cfd69-tjs7n"
Dec 16 13:08:19.222048 kubelet[3183]: I1216 13:08:19.222018 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8sfw\" (UniqueName: \"kubernetes.io/projected/d909e58a-0385-4774-8fd8-0e43ade4f95f-kube-api-access-v8sfw\") pod \"whisker-555f4cfd69-tjs7n\" (UID: \"d909e58a-0385-4774-8fd8-0e43ade4f95f\") " pod="calico-system/whisker-555f4cfd69-tjs7n"
Dec 16 13:08:19.222048 kubelet[3183]: I1216 13:08:19.222040 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d909e58a-0385-4774-8fd8-0e43ade4f95f-whisker-backend-key-pair\") pod \"whisker-555f4cfd69-tjs7n\" (UID: \"d909e58a-0385-4774-8fd8-0e43ade4f95f\") " pod="calico-system/whisker-555f4cfd69-tjs7n"
Dec 16 13:08:19.475206 containerd[1739]: time="2025-12-16T13:08:19.474888052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555f4cfd69-tjs7n,Uid:d909e58a-0385-4774-8fd8-0e43ade4f95f,Namespace:calico-system,Attempt:0,}"
Dec 16 13:08:19.634381 systemd-networkd[1352]: cali5c26334ba2d: Link UP
Dec 16 13:08:19.636955 systemd-networkd[1352]: cali5c26334ba2d: Gained carrier
Dec 16 13:08:19.677089 containerd[1739]: 2025-12-16 13:08:19.527 [INFO][4360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 16 13:08:19.677089 containerd[1739]: 2025-12-16 13:08:19.539 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0 whisker-555f4cfd69- calico-system d909e58a-0385-4774-8fd8-0e43ade4f95f 902 0 2025-12-16 13:08:19 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:555f4cfd69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 whisker-555f4cfd69-tjs7n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5c26334ba2d [] [] }} ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-"
Dec 16 13:08:19.677089 containerd[1739]: 2025-12-16 13:08:19.540 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.677089 containerd[1739]: 2025-12-16 13:08:19.586 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" HandleID="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Workload="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.587 [INFO][4373] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" HandleID="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Workload="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"whisker-555f4cfd69-tjs7n", "timestamp":"2025-12-16 13:08:19.586258342 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.587 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.587 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.587 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665' Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.594 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.597 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.602 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.603 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.677880 containerd[1739]: 2025-12-16 13:08:19.606 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.606 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.607 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777 Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.616 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.621 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.65/26] block=192.168.95.64/26 handle="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.621 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.65/26] handle="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.622 
[INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 13:08:19.678102 containerd[1739]: 2025-12-16 13:08:19.622 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.65/26] IPv6=[] ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" HandleID="k8s-pod-network.2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Workload="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.678257 containerd[1739]: 2025-12-16 13:08:19.627 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0", GenerateName:"whisker-555f4cfd69-", Namespace:"calico-system", SelfLink:"", UID:"d909e58a-0385-4774-8fd8-0e43ade4f95f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555f4cfd69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"whisker-555f4cfd69-tjs7n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c26334ba2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:19.678257 containerd[1739]: 2025-12-16 13:08:19.627 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.65/32] ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.678340 containerd[1739]: 2025-12-16 13:08:19.627 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c26334ba2d ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.678340 containerd[1739]: 2025-12-16 13:08:19.634 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.678386 containerd[1739]: 2025-12-16 13:08:19.634 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0", GenerateName:"whisker-555f4cfd69-", Namespace:"calico-system", SelfLink:"", UID:"d909e58a-0385-4774-8fd8-0e43ade4f95f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555f4cfd69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777", Pod:"whisker-555f4cfd69-tjs7n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c26334ba2d", MAC:"ee:42:a2:70:c9:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:19.678441 containerd[1739]: 2025-12-16 13:08:19.672 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" Namespace="calico-system" Pod="whisker-555f4cfd69-tjs7n" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-whisker--555f4cfd69--tjs7n-eth0"
Dec 16 13:08:19.728826 containerd[1739]: time="2025-12-16T13:08:19.725939033Z" level=info msg="connecting to shim 2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777" address="unix:///run/containerd/s/acb953405cc1281e929befcb898ed022df041b81468b3104bddd298af1a8a566" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:19.768833 systemd[1]: Started cri-containerd-2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777.scope - libcontainer container 2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777.
Dec 16 13:08:19.820063 containerd[1739]: time="2025-12-16T13:08:19.820025125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555f4cfd69-tjs7n,Uid:d909e58a-0385-4774-8fd8-0e43ade4f95f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2caca10aae63ba119c59b1055b94d38b1675de182e1377dca200dd3ace85a777\"" Dec 16 13:08:19.822626 containerd[1739]: time="2025-12-16T13:08:19.822591153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:08:20.039981 systemd-networkd[1352]: vxlan.calico: Link UP Dec 16 13:08:20.039989 systemd-networkd[1352]: vxlan.calico: Gained carrier Dec 16 13:08:20.190835 containerd[1739]: time="2025-12-16T13:08:20.190697719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:20.193621 containerd[1739]: time="2025-12-16T13:08:20.193510721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:08:20.193621 containerd[1739]: time="2025-12-16T13:08:20.193530575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:08:20.193760 kubelet[3183]: E1216 13:08:20.193726 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:20.193800 kubelet[3183]: E1216 13:08:20.193777 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:08:20.193919 kubelet[3183]: E1216 13:08:20.193890 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:744f40c00f6d4b9f9afec70a4a976be7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:20.196330 containerd[1739]: time="2025-12-16T13:08:20.196271384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:08:20.544764 containerd[1739]: time="2025-12-16T13:08:20.544718133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:20.547503 containerd[1739]: time="2025-12-16T13:08:20.547459916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:08:20.547554 containerd[1739]: time="2025-12-16T13:08:20.547477017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:20.547733 kubelet[3183]: E1216 13:08:20.547700 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:20.548047 kubelet[3183]: E1216 13:08:20.547747 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:08:20.548088 kubelet[3183]: E1216 13:08:20.547872 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:20.549171 kubelet[3183]: E1216 13:08:20.549061 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f"
Dec 16 13:08:20.981974 kubelet[3183]: I1216 13:08:20.981933 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e5d092-99f2-468b-8feb-f5f41df7cbc8" path="/var/lib/kubelet/pods/a1e5d092-99f2-468b-8feb-f5f41df7cbc8/volumes"
Dec 16 13:08:21.112185 kubelet[3183]: E1216 13:08:21.112137 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f"
Dec 16 13:08:21.306782 systemd-networkd[1352]: vxlan.calico: Gained IPv6LL
Dec 16 13:08:21.498767 systemd-networkd[1352]: cali5c26334ba2d: Gained IPv6LL
Dec 16 13:08:22.980857 containerd[1739]: time="2025-12-16T13:08:22.980698586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76554f6877-gxtwh,Uid:29fc32db-4a73-46de-9d39-e11c06875a97,Namespace:calico-system,Attempt:0,}"
Dec 16 13:08:23.075061 systemd-networkd[1352]: cali5b4e3a1a124: Link UP
Dec 16 13:08:23.075370 systemd-networkd[1352]: cali5b4e3a1a124: Gained carrier
Dec 16 13:08:23.100413 containerd[1739]: 2025-12-16 13:08:23.020 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0 calico-kube-controllers-76554f6877- calico-system 29fc32db-4a73-46de-9d39-e11c06875a97 839 0 2025-12-16 13:08:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76554f6877 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 calico-kube-controllers-76554f6877-gxtwh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b4e3a1a124 [] [] }} ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-"
Dec 16 13:08:23.100413 containerd[1739]: 2025-12-16 13:08:23.020 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.100413 containerd[1739]: 2025-12-16 13:08:23.043 [INFO][4558] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" HandleID="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.043 [INFO][4558] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" HandleID="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac380), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"calico-kube-controllers-76554f6877-gxtwh", "timestamp":"2025-12-16 13:08:23.043156082 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.043 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.043 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.043 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665'
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.049 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.053 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.056 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.058 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.100695 containerd[1739]: 2025-12-16 13:08:23.059 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.059 [INFO][4558] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.060 [INFO][4558] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.065 [INFO][4558] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.070 [INFO][4558] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.66/26] block=192.168.95.64/26 handle="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.070 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.66/26] handle="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.070 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 13:08:23.101021 containerd[1739]: 2025-12-16 13:08:23.070 [INFO][4558] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.66/26] IPv6=[] ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" HandleID="k8s-pod-network.85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.101173 containerd[1739]: 2025-12-16 13:08:23.071 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0", GenerateName:"calico-kube-controllers-76554f6877-", Namespace:"calico-system", SelfLink:"", UID:"29fc32db-4a73-46de-9d39-e11c06875a97", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76554f6877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"calico-kube-controllers-76554f6877-gxtwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b4e3a1a124", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:23.101240 containerd[1739]: 2025-12-16 13:08:23.071 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.66/32] ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.101240 containerd[1739]: 2025-12-16 13:08:23.071 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b4e3a1a124 ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.101240 containerd[1739]: 2025-12-16 13:08:23.076 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.101307 containerd[1739]: 2025-12-16 13:08:23.076 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0", GenerateName:"calico-kube-controllers-76554f6877-", Namespace:"calico-system", SelfLink:"", UID:"29fc32db-4a73-46de-9d39-e11c06875a97", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76554f6877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67", Pod:"calico-kube-controllers-76554f6877-gxtwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b4e3a1a124", MAC:"e2:c4:6a:30:67:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:23.101364 containerd[1739]: 2025-12-16 13:08:23.097 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" Namespace="calico-system" Pod="calico-kube-controllers-76554f6877-gxtwh" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--kube--controllers--76554f6877--gxtwh-eth0"
Dec 16 13:08:23.144176 containerd[1739]: time="2025-12-16T13:08:23.144090309Z" level=info msg="connecting to shim 85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67" address="unix:///run/containerd/s/028cbca2db7c10393f19feeaded3eb418a2db78fb92246ed9ab08b14e1bc2220" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:23.163792 systemd[1]: Started cri-containerd-85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67.scope - libcontainer container 85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67.
Dec 16 13:08:23.210124 containerd[1739]: time="2025-12-16T13:08:23.210070848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76554f6877-gxtwh,Uid:29fc32db-4a73-46de-9d39-e11c06875a97,Namespace:calico-system,Attempt:0,} returns sandbox id \"85a7ad32495b0d786329680b3b44b8ad32dd044078d7cb336beb014d08b94e67\""
Dec 16 13:08:23.213654 containerd[1739]: time="2025-12-16T13:08:23.213586946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:08:23.579366 containerd[1739]: time="2025-12-16T13:08:23.579319939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:23.582246 containerd[1739]: time="2025-12-16T13:08:23.582220329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:08:23.582323 containerd[1739]: time="2025-12-16T13:08:23.582238073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:23.582473 kubelet[3183]: E1216 13:08:23.582436 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:23.582838 kubelet[3183]: E1216 13:08:23.582491 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:23.582838 kubelet[3183]: E1216 13:08:23.582630 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qnpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:23.584145 kubelet[3183]: E1216 13:08:23.584106 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97"
Dec 16 13:08:24.115960 kubelet[3183]: E1216 13:08:24.115920 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97"
Dec 16 13:08:24.826841 systemd-networkd[1352]: cali5b4e3a1a124: Gained IPv6LL
Dec 16 13:08:24.981164 containerd[1739]: time="2025-12-16T13:08:24.980461097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7dvks,Uid:48b113be-574b-47a2-86df-86aede15472d,Namespace:calico-system,Attempt:0,}"
Dec 16 13:08:24.981509 containerd[1739]: time="2025-12-16T13:08:24.981278184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-8ttxw,Uid:543c36ea-093c-4498-a84b-c504d49ef8b8,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 13:08:24.981866 containerd[1739]: time="2025-12-16T13:08:24.981722706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkz6f,Uid:99d97f9f-faa7-4627-a5cf-6dfa6f8affe5,Namespace:kube-system,Attempt:0,}"
Dec 16 13:08:25.118360 kubelet[3183]: E1216 13:08:25.118272 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97"
Dec 16 13:08:25.130790 systemd-networkd[1352]: calie1fdbd84043: Link UP
Dec 16 13:08:25.131115 systemd-networkd[1352]: calie1fdbd84043: Gained carrier
Dec 16 13:08:25.150855 containerd[1739]: 2025-12-16 13:08:25.042 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0 coredns-668d6bf9bc- kube-system 99d97f9f-faa7-4627-a5cf-6dfa6f8affe5 828 0 2025-12-16 13:07:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 coredns-668d6bf9bc-wkz6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1fdbd84043 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-"
Dec 16 13:08:25.150855 containerd[1739]: 2025-12-16 13:08:25.043 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.150855 containerd[1739]: 2025-12-16 13:08:25.088 [INFO][4654] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" HandleID="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.089 [INFO][4654] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" HandleID="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"coredns-668d6bf9bc-wkz6f", "timestamp":"2025-12-16 13:08:25.088290797 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.089 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.089 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.089 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665'
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.095 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.100 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.104 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.105 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151151 containerd[1739]: 2025-12-16 13:08:25.106 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.106 [INFO][4654] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.107 [INFO][4654] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.114 [INFO][4654] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.120 [INFO][4654] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.67/26] block=192.168.95.64/26 handle="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.120 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.67/26] handle="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.121 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 13:08:25.151955 containerd[1739]: 2025-12-16 13:08:25.121 [INFO][4654] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.67/26] IPv6=[] ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" HandleID="k8s-pod-network.01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.124 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"99d97f9f-faa7-4627-a5cf-6dfa6f8affe5", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"coredns-668d6bf9bc-wkz6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1fdbd84043", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.124 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.67/32] ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.124 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1fdbd84043 ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.130 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.131 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"99d97f9f-faa7-4627-a5cf-6dfa6f8affe5", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29", Pod:"coredns-668d6bf9bc-wkz6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1fdbd84043", MAC:"92:6c:f0:a3:57:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:25.152121 containerd[1739]: 2025-12-16 13:08:25.148 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" Namespace="kube-system" Pod="coredns-668d6bf9bc-wkz6f" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--wkz6f-eth0"
Dec 16 13:08:25.219563 containerd[1739]: time="2025-12-16T13:08:25.219496616Z" level=info msg="connecting to shim 01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29" address="unix:///run/containerd/s/88f9def51171bb8d190f5755a48e8483dd3cc21134510f73defac7bcc845595e" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:25.257434 systemd-networkd[1352]: calic2fa0502045: Link UP
Dec 16 13:08:25.258505 systemd-networkd[1352]: calic2fa0502045: Gained carrier
Dec 16 13:08:25.277004 systemd[1]: Started cri-containerd-01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29.scope - libcontainer container 01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29.
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.058 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0 calico-apiserver-5bbcddbc87- calico-apiserver 543c36ea-093c-4498-a84b-c504d49ef8b8 837 0 2025-12-16 13:07:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbcddbc87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 calico-apiserver-5bbcddbc87-8ttxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2fa0502045 [] [] }} ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.059 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.090 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" HandleID="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.090 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" HandleID="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d10f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-ace8908665", "pod":"calico-apiserver-5bbcddbc87-8ttxw", "timestamp":"2025-12-16 13:08:25.09068755 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.091 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.121 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.121 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665'
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.196 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.201 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.204 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.205 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.207 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.207 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.209 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.231 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.242 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.68/26] block=192.168.95.64/26 handle="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.243 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.68/26] handle="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" host="ci-4459.2.2-a-ace8908665"
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.243 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 13:08:25.279356 containerd[1739]: 2025-12-16 13:08:25.244 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.68/26] IPv6=[] ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" HandleID="k8s-pod-network.4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.255 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0", GenerateName:"calico-apiserver-5bbcddbc87-", Namespace:"calico-apiserver", SelfLink:"", UID:"543c36ea-093c-4498-a84b-c504d49ef8b8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbcddbc87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"calico-apiserver-5bbcddbc87-8ttxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2fa0502045", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.255 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.68/32] ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.255 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2fa0502045 ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.258 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.258 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0", GenerateName:"calico-apiserver-5bbcddbc87-", Namespace:"calico-apiserver", SelfLink:"", UID:"543c36ea-093c-4498-a84b-c504d49ef8b8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbcddbc87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63", Pod:"calico-apiserver-5bbcddbc87-8ttxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2fa0502045", MAC:"92:a9:7e:48:de:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 13:08:25.280753 containerd[1739]: 2025-12-16 13:08:25.275 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-8ttxw" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--8ttxw-eth0"
Dec 16 13:08:25.329261 containerd[1739]: time="2025-12-16T13:08:25.329222191Z" level=info msg="connecting to shim 4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63" address="unix:///run/containerd/s/e91f70b02d1f566155c16b7f899eda3663e07d1c47beb1bf88103bf9f62f722b" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:08:25.346357 systemd-networkd[1352]: calid19b4f04725: Link UP
Dec 16 13:08:25.347421 systemd-networkd[1352]: calid19b4f04725: Gained carrier
Dec 16 13:08:25.350679 containerd[1739]: time="2025-12-16T13:08:25.350160270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkz6f,Uid:99d97f9f-faa7-4627-a5cf-6dfa6f8affe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29\""
Dec 16 13:08:25.356708 containerd[1739]: time="2025-12-16T13:08:25.356685787Z" level=info msg="CreateContainer within sandbox \"01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.054 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0 goldmane-666569f655- calico-system 48b113be-574b-47a2-86df-86aede15472d 832 0 2025-12-16 13:07:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 goldmane-666569f655-7dvks eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid19b4f04725 [] [] }} ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-"
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.054 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0"
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.102 [INFO][4659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" HandleID="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Workload="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0"
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.102 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" HandleID="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Workload="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"goldmane-666569f655-7dvks", "timestamp":"2025-12-16 13:08:25.102033736 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.102 [INFO][4659] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.243 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.244 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665' Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.297 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.302 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.307 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.310 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.312 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.312 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.313 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.318 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.337 [INFO][4659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.69/26] block=192.168.95.64/26 handle="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.337 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.69/26] handle="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.337 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:25.380941 containerd[1739]: 2025-12-16 13:08:25.337 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.69/26] IPv6=[] ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" HandleID="k8s-pod-network.8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Workload="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.342 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"48b113be-574b-47a2-86df-86aede15472d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"goldmane-666569f655-7dvks", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid19b4f04725", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.342 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.69/32] ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.342 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid19b4f04725 ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.349 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.351 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" 
Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"48b113be-574b-47a2-86df-86aede15472d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc", Pod:"goldmane-666569f655-7dvks", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid19b4f04725", MAC:"52:ea:f4:cc:3b:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:25.381451 containerd[1739]: 2025-12-16 13:08:25.374 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" Namespace="calico-system" Pod="goldmane-666569f655-7dvks" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-goldmane--666569f655--7dvks-eth0" Dec 16 13:08:25.382814 systemd[1]: Started cri-containerd-4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63.scope - libcontainer container 4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63. Dec 16 13:08:25.393843 containerd[1739]: time="2025-12-16T13:08:25.393817238Z" level=info msg="Container bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:25.409195 containerd[1739]: time="2025-12-16T13:08:25.409172052Z" level=info msg="CreateContainer within sandbox \"01def402362094ce629021afc5f9a1996a49880253d52b13f6c393d811236b29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe\"" Dec 16 13:08:25.409685 containerd[1739]: time="2025-12-16T13:08:25.409641828Z" level=info msg="StartContainer for \"bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe\"" Dec 16 13:08:25.411927 containerd[1739]: time="2025-12-16T13:08:25.411879275Z" level=info msg="connecting to shim bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe" address="unix:///run/containerd/s/88f9def51171bb8d190f5755a48e8483dd3cc21134510f73defac7bcc845595e" protocol=ttrpc version=3 Dec 16 13:08:25.437823 systemd[1]: Started cri-containerd-bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe.scope - libcontainer container bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe. 
Dec 16 13:08:25.449016 containerd[1739]: time="2025-12-16T13:08:25.448542664Z" level=info msg="connecting to shim 8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc" address="unix:///run/containerd/s/b89401bb7cfda19875b57919849751c718bde36fadb69a051c2a28d33e0b9dbf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:25.450612 containerd[1739]: time="2025-12-16T13:08:25.450590050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-8ttxw,Uid:543c36ea-093c-4498-a84b-c504d49ef8b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a83baf0b18b10c4a40f75b4544d65d90482ccd5f8ff0f6dc5e677eb7e3adf63\"" Dec 16 13:08:25.453600 containerd[1739]: time="2025-12-16T13:08:25.453583235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:25.476970 systemd[1]: Started cri-containerd-8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc.scope - libcontainer container 8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc. Dec 16 13:08:25.486689 containerd[1739]: time="2025-12-16T13:08:25.486652165Z" level=info msg="StartContainer for \"bdcc9c3afede8b4385f75f67b00196c4f601217dd2ef4de5e902741523f8e2fe\" returns successfully" Dec 16 13:08:25.533566 containerd[1739]: time="2025-12-16T13:08:25.533490068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7dvks,Uid:48b113be-574b-47a2-86df-86aede15472d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f54b822d6006bb884d86f73003d648ee447fd5fa48a10b3a00f870102f04bcc\"" Dec 16 13:08:25.830219 containerd[1739]: time="2025-12-16T13:08:25.830162542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:25.834409 containerd[1739]: time="2025-12-16T13:08:25.834378596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:25.834479 containerd[1739]: time="2025-12-16T13:08:25.834457836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:25.834613 kubelet[3183]: E1216 13:08:25.834578 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:25.834825 kubelet[3183]: E1216 13:08:25.834626 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:25.835112 containerd[1739]: time="2025-12-16T13:08:25.835010885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:08:25.835182 kubelet[3183]: E1216 13:08:25.835079 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpkkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:25.837376 kubelet[3183]: E1216 13:08:25.837339 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:25.980913 containerd[1739]: time="2025-12-16T13:08:25.980870096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rgmjl,Uid:1270693e-5f97-4929-9900-78e53066ce6a,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:26.106149 systemd-networkd[1352]: calicbee694d3e5: Link UP Dec 16 13:08:26.107835 systemd-networkd[1352]: calicbee694d3e5: Gained carrier Dec 16 13:08:26.127820 kubelet[3183]: E1216 13:08:26.127785 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.048 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0 coredns-668d6bf9bc- kube-system 1270693e-5f97-4929-9900-78e53066ce6a 838 0 2025-12-16 13:07:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 coredns-668d6bf9bc-rgmjl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbee694d3e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.048 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.071 [INFO][4896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" HandleID="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.071 [INFO][4896] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" HandleID="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"coredns-668d6bf9bc-rgmjl", "timestamp":"2025-12-16 13:08:26.071104723 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.071 [INFO][4896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.071 [INFO][4896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.071 [INFO][4896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665' Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.076 [INFO][4896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.079 [INFO][4896] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.082 [INFO][4896] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.084 [INFO][4896] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.085 [INFO][4896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.085 [INFO][4896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.086 [INFO][4896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0 Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.096 [INFO][4896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.101 [INFO][4896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.70/26] block=192.168.95.64/26 handle="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.101 [INFO][4896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.70/26] handle="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.101 [INFO][4896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:26.128407 containerd[1739]: 2025-12-16 13:08:26.101 [INFO][4896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.70/26] IPv6=[] ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" HandleID="k8s-pod-network.b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Workload="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.103 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1270693e-5f97-4929-9900-78e53066ce6a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"coredns-668d6bf9bc-rgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbee694d3e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.103 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.70/32] ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.103 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbee694d3e5 ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.108 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.108 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1270693e-5f97-4929-9900-78e53066ce6a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0", Pod:"coredns-668d6bf9bc-rgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbee694d3e5", MAC:"7e:fd:47:a9:d2:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:26.129479 containerd[1739]: 2025-12-16 13:08:26.121 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rgmjl" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-coredns--668d6bf9bc--rgmjl-eth0" Dec 16 13:08:26.175829 containerd[1739]: time="2025-12-16T13:08:26.175736093Z" level=info msg="connecting to shim b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0" address="unix:///run/containerd/s/e50bde8752964ebffd7ffce52f7fb11727cc3ae9b20a4ca653838d491308cea5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:26.204826 systemd[1]: Started cri-containerd-b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0.scope - libcontainer container b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0. 
Dec 16 13:08:26.215065 containerd[1739]: time="2025-12-16T13:08:26.214962039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:26.220718 containerd[1739]: time="2025-12-16T13:08:26.220686408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:26.220882 containerd[1739]: time="2025-12-16T13:08:26.220786210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:08:26.221164 kubelet[3183]: E1216 13:08:26.221036 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:26.221236 kubelet[3183]: E1216 13:08:26.221176 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:26.221825 kubelet[3183]: E1216 13:08:26.221774 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdqtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:26.224593 kubelet[3183]: E1216 13:08:26.224558 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:08:26.261286 containerd[1739]: time="2025-12-16T13:08:26.261237461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rgmjl,Uid:1270693e-5f97-4929-9900-78e53066ce6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0\"" Dec 16 13:08:26.263426 containerd[1739]: time="2025-12-16T13:08:26.263401609Z" level=info msg="CreateContainer within sandbox \"b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:08:26.282281 containerd[1739]: time="2025-12-16T13:08:26.282258871Z" level=info msg="Container df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:26.296297 containerd[1739]: time="2025-12-16T13:08:26.296276872Z" level=info msg="CreateContainer within sandbox \"b06d45cc3d4635cf08d8a9104c810b55fc1e0f3bbe60da39b3f70373109faca0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179\"" Dec 16 13:08:26.296736 containerd[1739]: time="2025-12-16T13:08:26.296711092Z" level=info msg="StartContainer for \"df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179\"" Dec 16 13:08:26.297386 containerd[1739]: time="2025-12-16T13:08:26.297340055Z" level=info msg="connecting to shim df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179" address="unix:///run/containerd/s/e50bde8752964ebffd7ffce52f7fb11727cc3ae9b20a4ca653838d491308cea5" protocol=ttrpc version=3 Dec 16 13:08:26.318804 systemd[1]: Started cri-containerd-df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179.scope - libcontainer container 
df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179. Dec 16 13:08:26.350363 containerd[1739]: time="2025-12-16T13:08:26.349542452Z" level=info msg="StartContainer for \"df592c172af86cbb028ed959708823f181ab6bee8fa62679884e58ca0029f179\" returns successfully" Dec 16 13:08:26.362786 systemd-networkd[1352]: calie1fdbd84043: Gained IPv6LL Dec 16 13:08:26.426771 systemd-networkd[1352]: calic2fa0502045: Gained IPv6LL Dec 16 13:08:26.981167 containerd[1739]: time="2025-12-16T13:08:26.980754356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bv5h8,Uid:b24eac48-5262-426b-9b2f-c5c56fc3732b,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:26.988976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305090212.mount: Deactivated successfully. Dec 16 13:08:27.003276 systemd-networkd[1352]: calid19b4f04725: Gained IPv6LL Dec 16 13:08:27.087351 systemd-networkd[1352]: cali228aaaae66f: Link UP Dec 16 13:08:27.088242 systemd-networkd[1352]: cali228aaaae66f: Gained carrier Dec 16 13:08:27.113089 kubelet[3183]: I1216 13:08:27.112209 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wkz6f" podStartSLOduration=41.112190241 podStartE2EDuration="41.112190241s" podCreationTimestamp="2025-12-16 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:26.193887508 +0000 UTC m=+43.304567989" watchObservedRunningTime="2025-12-16 13:08:27.112190241 +0000 UTC m=+44.222870715" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.017 [INFO][4997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0 csi-node-driver- calico-system b24eac48-5262-426b-9b2f-c5c56fc3732b 717 0 2025-12-16 13:08:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 csi-node-driver-bv5h8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali228aaaae66f [] [] }} ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.017 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.040 [INFO][5009] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" HandleID="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Workload="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.040 [INFO][5009] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" 
HandleID="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Workload="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-ace8908665", "pod":"csi-node-driver-bv5h8", "timestamp":"2025-12-16 13:08:27.040393925 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.040 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.040 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.040 [INFO][5009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665' Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.044 [INFO][5009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.047 [INFO][5009] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.050 [INFO][5009] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.053 [INFO][5009] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.054 [INFO][5009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.054 [INFO][5009] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.055 [INFO][5009] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42 Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.060 [INFO][5009] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.082 [INFO][5009] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.71/26] block=192.168.95.64/26 handle="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.082 [INFO][5009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.71/26] handle="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.082 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:08:27.114263 containerd[1739]: 2025-12-16 13:08:27.082 [INFO][5009] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.71/26] IPv6=[] ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" HandleID="k8s-pod-network.97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Workload="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.084 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b24eac48-5262-426b-9b2f-c5c56fc3732b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"csi-node-driver-bv5h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali228aaaae66f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.084 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.71/32] ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.084 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali228aaaae66f ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.087 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.087 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b24eac48-5262-426b-9b2f-c5c56fc3732b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42", Pod:"csi-node-driver-bv5h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali228aaaae66f", MAC:"12:01:26:04:04:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:27.114773 containerd[1739]: 2025-12-16 13:08:27.111 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" Namespace="calico-system" Pod="csi-node-driver-bv5h8" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-csi--node--driver--bv5h8-eth0" Dec 16 13:08:27.138292 kubelet[3183]: E1216 13:08:27.138249 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:08:27.139930 kubelet[3183]: E1216 13:08:27.139885 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:27.168436 containerd[1739]: time="2025-12-16T13:08:27.168400059Z" level=info msg="connecting to shim 
97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42" address="unix:///run/containerd/s/880c5febea9753da18b6af51e13b57209607e09631115e4eaadf3fab08d7482e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:27.183868 kubelet[3183]: I1216 13:08:27.183770 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rgmjl" podStartSLOduration=41.183747231 podStartE2EDuration="41.183747231s" podCreationTimestamp="2025-12-16 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:08:27.160732249 +0000 UTC m=+44.271412722" watchObservedRunningTime="2025-12-16 13:08:27.183747231 +0000 UTC m=+44.294427764" Dec 16 13:08:27.215802 systemd[1]: Started cri-containerd-97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42.scope - libcontainer container 97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42. Dec 16 13:08:27.267083 containerd[1739]: time="2025-12-16T13:08:27.267054100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bv5h8,Uid:b24eac48-5262-426b-9b2f-c5c56fc3732b,Namespace:calico-system,Attempt:0,} returns sandbox id \"97134c0d9410b8e41f4c6b9bd94a188b5498909bc6a0cc46ffbf80c1b4c57a42\"" Dec 16 13:08:27.268886 containerd[1739]: time="2025-12-16T13:08:27.268832922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:08:27.633366 containerd[1739]: time="2025-12-16T13:08:27.633143366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:27.636193 containerd[1739]: time="2025-12-16T13:08:27.636117704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:08:27.636193 containerd[1739]: time="2025-12-16T13:08:27.636120526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:08:27.636645 kubelet[3183]: E1216 13:08:27.636330 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:27.636645 kubelet[3183]: E1216 13:08:27.636398 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:27.636645 kubelet[3183]: E1216 13:08:27.636533 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:27.638747 containerd[1739]: time="2025-12-16T13:08:27.638702862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:08:27.898864 systemd-networkd[1352]: calicbee694d3e5: Gained IPv6LL Dec 16 13:08:27.980654 containerd[1739]: time="2025-12-16T13:08:27.980610156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-qq9w6,Uid:732c00ab-68ae-445e-a71a-f5b84da1878e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:27.994882 containerd[1739]: time="2025-12-16T13:08:27.992844907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:27.996315 containerd[1739]: time="2025-12-16T13:08:27.996265550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:08:27.996441 containerd[1739]: time="2025-12-16T13:08:27.996373753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:08:27.997434 kubelet[3183]: E1216 13:08:27.996728 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:27.997434 kubelet[3183]: E1216 13:08:27.996784 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:27.997434 kubelet[3183]: E1216 13:08:27.996902 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:27.998368 kubelet[3183]: E1216 13:08:27.998332 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:28.114910 systemd-networkd[1352]: caliba439657d3a: Link UP Dec 16 13:08:28.115874 systemd-networkd[1352]: caliba439657d3a: Gained carrier Dec 16 13:08:28.145117 kubelet[3183]: E1216 13:08:28.145043 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.029 [INFO][5071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0 calico-apiserver-5bbcddbc87- calico-apiserver 732c00ab-68ae-445e-a71a-f5b84da1878e 835 0 2025-12-16 13:07:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbcddbc87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-ace8908665 calico-apiserver-5bbcddbc87-qq9w6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba439657d3a [] [] }} ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.029 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.052 [INFO][5082] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" HandleID="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 
13:08:28.052 [INFO][5082] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" HandleID="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-ace8908665", "pod":"calico-apiserver-5bbcddbc87-qq9w6", "timestamp":"2025-12-16 13:08:28.052747711 +0000 UTC"}, Hostname:"ci-4459.2.2-a-ace8908665", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.053 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.053 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.053 [INFO][5082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-ace8908665' Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.057 [INFO][5082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.061 [INFO][5082] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.063 [INFO][5082] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.065 [INFO][5082] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.069 [INFO][5082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.069 [INFO][5082] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.074 [INFO][5082] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028 Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.084 [INFO][5082] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.102 [INFO][5082] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.72/26] block=192.168.95.64/26 handle="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" host="ci-4459.2.2-a-ace8908665" Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.102 [INFO][5082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.72/26] handle="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" host="ci-4459.2.2-a-ace8908665" 
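The IPAM entries above trace the full allocation path: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.95.64/26 block, and claims 192.168.95.72 from it (the lock release and the final assignment are logged just below). A minimal sketch, standard library only and not Calico's code, that checks the claimed address really falls inside the node-affine block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block this node holds an affinity for; a /26 spans the 64 addresses
	// 192.168.95.64 through 192.168.95.127.
	block := netip.MustParsePrefix("192.168.95.64/26")
	// Address the IPAM plugin claimed for the pod.
	ip := netip.MustParseAddr("192.168.95.72")
	fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // prints true
}

The pod itself receives the address as a single /32 (IPNetworks:["192.168.95.72/32"] in the WorkloadEndpoint written below); the /26 affinity only scopes which node may hand out addresses from that block.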
Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.102 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:08:28.154084 containerd[1739]: 2025-12-16 13:08:28.102 [INFO][5082] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.72/26] IPv6=[] ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" HandleID="k8s-pod-network.a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Workload="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.105 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0", GenerateName:"calico-apiserver-5bbcddbc87-", Namespace:"calico-apiserver", SelfLink:"", UID:"732c00ab-68ae-445e-a71a-f5b84da1878e", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbcddbc87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"", Pod:"calico-apiserver-5bbcddbc87-qq9w6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba439657d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.105 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.72/32] ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.105 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba439657d3a ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.115 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" 
WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.119 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0", GenerateName:"calico-apiserver-5bbcddbc87-", Namespace:"calico-apiserver", SelfLink:"", UID:"732c00ab-68ae-445e-a71a-f5b84da1878e", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbcddbc87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-ace8908665", ContainerID:"a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028", Pod:"calico-apiserver-5bbcddbc87-qq9w6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba439657d3a", MAC:"e6:63:9f:8a:f1:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:08:28.154616 containerd[1739]: 2025-12-16 13:08:28.151 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" Namespace="calico-apiserver" Pod="calico-apiserver-5bbcddbc87-qq9w6" WorkloadEndpoint="ci--4459.2.2--a--ace8908665-k8s-calico--apiserver--5bbcddbc87--qq9w6-eth0" Dec 16 13:08:28.208405 containerd[1739]: time="2025-12-16T13:08:28.208369581Z" level=info msg="connecting to shim a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028" address="unix:///run/containerd/s/7f9d44ab90cf4191b61b0c2d2c4e4055d9a41cbe8b22fcec91923a3e615ad00a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:28.241978 systemd[1]: Started cri-containerd-a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028.scope - libcontainer container a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028. 
Dec 16 13:08:28.282833 systemd-networkd[1352]: cali228aaaae66f: Gained IPv6LL Dec 16 13:08:28.311480 containerd[1739]: time="2025-12-16T13:08:28.311448354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbcddbc87-qq9w6,Uid:732c00ab-68ae-445e-a71a-f5b84da1878e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a25969695fec28c8b6d7f5b7eaba37d2e3ae2d2dc75edf215a6d5c2211f0b028\"" Dec 16 13:08:28.313622 containerd[1739]: time="2025-12-16T13:08:28.313563666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:28.666393 containerd[1739]: time="2025-12-16T13:08:28.666341434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:28.670529 containerd[1739]: time="2025-12-16T13:08:28.670504008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:28.670586 containerd[1739]: time="2025-12-16T13:08:28.670575907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:28.670785 kubelet[3183]: E1216 13:08:28.670729 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:28.670834 kubelet[3183]: E1216 13:08:28.670799 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:28.670976 kubelet[3183]: E1216 13:08:28.670936 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m26wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:08:28.672888 kubelet[3183]: E1216 13:08:28.672851 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e"
Dec 16 13:08:29.145828 kubelet[3183]: E1216 13:08:29.145695 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e"
Dec 16 13:08:29.147842 kubelet[3183]: E1216 13:08:29.147806 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
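Note the shape of the retries here: the first sync after a failed pull reports ErrImagePull, and each later sync reports ImagePullBackOff at growing intervals (13:08:29 above, 13:08:30 just below, then not until 13:08:56 for the same pod). This is the kubelet's per-image exponential back-off; an illustrative sketch of the schedule, assuming the upstream defaults of a 10-second initial delay doubling up to a 5-minute cap, not the kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: next pull retry in %s\n", attempt, delay)
		// Double the wait after every failure, saturating at the cap.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

While the back-off is in force, a sync does not touch the registry at all; it fails immediately with ImagePullBackOff, which is why the "Back-off pulling image" entries appear without a matching containerd fetch line.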
Dec 16 13:08:29.679690 kubelet[3183]: I1216 13:08:29.679371 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:08:29.946965 systemd-networkd[1352]: caliba439657d3a: Gained IPv6LL
Dec 16 13:08:30.148303 kubelet[3183]: E1216 13:08:30.148263 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e"
Dec 16 13:08:31.981287 containerd[1739]: time="2025-12-16T13:08:31.981221454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:08:32.359621 containerd[1739]: time="2025-12-16T13:08:32.359485406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:08:32.362657 containerd[1739]: time="2025-12-16T13:08:32.362614773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:08:32.362955 containerd[1739]: time="2025-12-16T13:08:32.362648456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 13:08:32.363332 kubelet[3183]: E1216 13:08:32.363161 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:32.363332 kubelet[3183]: E1216 13:08:32.363312 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:08:32.364338 kubelet[3183]: E1216 13:08:32.364201 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:744f40c00f6d4b9f9afec70a4a976be7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:32.367550 containerd[1739]: time="2025-12-16T13:08:32.367529516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:08:32.754704 containerd[1739]: time="2025-12-16T13:08:32.754597229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:32.757914 containerd[1739]: time="2025-12-16T13:08:32.757849373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:08:32.758155 containerd[1739]: time="2025-12-16T13:08:32.757888155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:08:32.758404 kubelet[3183]: E1216 13:08:32.758363 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:32.758543 kubelet[3183]: E1216 13:08:32.758487 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:08:32.758873 kubelet[3183]: E1216 13:08:32.758739 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:32.760700 kubelet[3183]: E1216 13:08:32.760208 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:08:36.983123 containerd[1739]: time="2025-12-16T13:08:36.983065731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:08:37.364756 containerd[1739]: time="2025-12-16T13:08:37.364519925Z" level=info msg="fetch failed after status: 404 Not Found" 
Dec 16 13:08:37.367339 containerd[1739]: time="2025-12-16T13:08:37.367277098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:08:37.367430 containerd[1739]: time="2025-12-16T13:08:37.367398887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:08:37.367636 kubelet[3183]: E1216 13:08:37.367597 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:37.367917 kubelet[3183]: E1216 13:08:37.367653 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:08:37.367917 kubelet[3183]: E1216 13:08:37.367795 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qnpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:37.370704 kubelet[3183]: E1216 13:08:37.369791 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:08:38.981183 containerd[1739]: time="2025-12-16T13:08:38.981139576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:39.333114 containerd[1739]: time="2025-12-16T13:08:39.332977156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:39.336342 containerd[1739]: time="2025-12-16T13:08:39.336216163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:39.336342 containerd[1739]: time="2025-12-16T13:08:39.336255751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:39.336505 kubelet[3183]: E1216 13:08:39.336451 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:39.336886 kubelet[3183]: E1216 13:08:39.336517 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:39.336886 kubelet[3183]: 
E1216 13:08:39.336798 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpkkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:39.337957 containerd[1739]: time="2025-12-16T13:08:39.337920461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:08:39.338154 kubelet[3183]: E1216 13:08:39.337916 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:39.693941 containerd[1739]: time="2025-12-16T13:08:39.693755730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:39.696987 containerd[1739]: time="2025-12-16T13:08:39.696870294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:08:39.696987 containerd[1739]: time="2025-12-16T13:08:39.696908272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:39.697459 kubelet[3183]: E1216 13:08:39.697387 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:39.697537 kubelet[3183]: E1216 13:08:39.697507 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:08:39.697976 kubelet[3183]: E1216 13:08:39.697916 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdqtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:39.699380 kubelet[3183]: E1216 13:08:39.699338 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:08:42.981893 containerd[1739]: time="2025-12-16T13:08:42.981637883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:08:43.339720 containerd[1739]: time="2025-12-16T13:08:43.338585668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:43.343015 containerd[1739]: time="2025-12-16T13:08:43.342919974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:08:43.343015 containerd[1739]: time="2025-12-16T13:08:43.342955306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:08:43.343166 kubelet[3183]: E1216 13:08:43.343119 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:43.343493 kubelet[3183]: E1216 13:08:43.343181 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:08:43.343656 kubelet[3183]: E1216 13:08:43.343622 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:43.345531 containerd[1739]: time="2025-12-16T13:08:43.345509864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:08:43.724686 containerd[1739]: time="2025-12-16T13:08:43.724092446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:43.730445 containerd[1739]: time="2025-12-16T13:08:43.730331057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:08:43.730445 containerd[1739]: time="2025-12-16T13:08:43.730415651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:08:43.730623 kubelet[3183]: E1216 13:08:43.730557 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:43.731486 kubelet[3183]: E1216 13:08:43.730635 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:08:43.731570 kubelet[3183]: E1216 13:08:43.731508 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:43.732800 kubelet[3183]: E1216 13:08:43.732757 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:43.983395 containerd[1739]: time="2025-12-16T13:08:43.980635194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:08:44.345295 containerd[1739]: time="2025-12-16T13:08:44.345129950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:08:44.348020 containerd[1739]: time="2025-12-16T13:08:44.347963399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:08:44.348236 containerd[1739]: time="2025-12-16T13:08:44.347975290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:08:44.348261 kubelet[3183]: E1216 13:08:44.348160 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:44.348538 kubelet[3183]: E1216 13:08:44.348512 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:08:44.348955 kubelet[3183]: E1216 13:08:44.348642 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m26wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:08:44.349849 kubelet[3183]: E1216 13:08:44.349820 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:08:45.981681 kubelet[3183]: E1216 13:08:45.981591 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:08:50.980933 kubelet[3183]: E1216 13:08:50.980648 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:08:51.981236 kubelet[3183]: 
E1216 13:08:51.981151 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:08:51.981236 kubelet[3183]: E1216 13:08:51.981151 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:08:56.982057 kubelet[3183]: E1216 13:08:56.981346 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:08:58.985626 kubelet[3183]: E1216 13:08:58.985580 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:08:59.981784 containerd[1739]: time="2025-12-16T13:08:59.981740971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:00.350851 containerd[1739]: time="2025-12-16T13:09:00.350445204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:00.353601 containerd[1739]: time="2025-12-16T13:09:00.353570752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:00.353713 containerd[1739]: time="2025-12-16T13:09:00.353626526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:09:00.353837 kubelet[3183]: E1216 13:09:00.353784 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:00.354153 kubelet[3183]: E1216 13:09:00.353837 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:00.354153 kubelet[3183]: E1216 13:09:00.353953 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:744f40c00f6d4b9f9afec70a4a976be7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:00.356603 containerd[1739]: time="2025-12-16T13:09:00.356574443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:00.720825 containerd[1739]: time="2025-12-16T13:09:00.720781874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:00.723601 containerd[1739]: time="2025-12-16T13:09:00.723539552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:00.723601 containerd[1739]: time="2025-12-16T13:09:00.723574378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:00.723947 kubelet[3183]: E1216 13:09:00.723891 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:00.724509 kubelet[3183]: E1216 13:09:00.724013 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:00.724509 kubelet[3183]: E1216 13:09:00.724145 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:00.725945 kubelet[3183]: E1216 13:09:00.725903 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:02.983192 containerd[1739]: time="2025-12-16T13:09:02.983150380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:03.357996 containerd[1739]: time="2025-12-16T13:09:03.357856313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:03.360729 containerd[1739]: time="2025-12-16T13:09:03.360650733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:03.360836 containerd[1739]: time="2025-12-16T13:09:03.360764383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:03.360952 kubelet[3183]: E1216 13:09:03.360915 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:03.361594 kubelet[3183]: E1216 13:09:03.360962 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:03.361594 kubelet[3183]: E1216 13:09:03.361107 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpkkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:03.362858 kubelet[3183]: E1216 13:09:03.362820 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:09:03.985296 containerd[1739]: time="2025-12-16T13:09:03.985249092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:04.378380 containerd[1739]: time="2025-12-16T13:09:04.378173347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:04.381100 containerd[1739]: time="2025-12-16T13:09:04.381006006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:04.381227 containerd[1739]: time="2025-12-16T13:09:04.381186177Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:04.382587 kubelet[3183]: E1216 13:09:04.381363 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:04.382587 kubelet[3183]: E1216 13:09:04.381412 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:04.382587 kubelet[3183]: E1216 13:09:04.381552 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdqtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:04.383272 kubelet[3183]: E1216 13:09:04.383232 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:09:04.983933 containerd[1739]: time="2025-12-16T13:09:04.983701986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:05.341731 containerd[1739]: time="2025-12-16T13:09:05.341595132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:05.344818 containerd[1739]: time="2025-12-16T13:09:05.344642840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:05.344818 containerd[1739]: time="2025-12-16T13:09:05.344768391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:05.344953 kubelet[3183]: E1216 13:09:05.344897 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:05.344994 kubelet[3183]: E1216 13:09:05.344957 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:05.345148 
kubelet[3183]: E1216 13:09:05.345094 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qnpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:05.346677 kubelet[3183]: E1216 13:09:05.346624 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 
16 13:09:10.981691 containerd[1739]: time="2025-12-16T13:09:10.981063760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:11.345862 containerd[1739]: time="2025-12-16T13:09:11.345513070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:11.349605 containerd[1739]: time="2025-12-16T13:09:11.349473193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:11.349605 containerd[1739]: time="2025-12-16T13:09:11.349557153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:09:11.349771 kubelet[3183]: E1216 13:09:11.349697 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:11.349771 kubelet[3183]: E1216 13:09:11.349740 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:11.350048 kubelet[3183]: E1216 13:09:11.349855 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:11.352470 containerd[1739]: time="2025-12-16T13:09:11.352440777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:11.702871 containerd[1739]: time="2025-12-16T13:09:11.702825025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:11.705918 containerd[1739]: time="2025-12-16T13:09:11.705785483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:11.705918 containerd[1739]: time="2025-12-16T13:09:11.705889345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:09:11.706241 kubelet[3183]: E1216 13:09:11.706187 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:11.706306 kubelet[3183]: E1216 13:09:11.706241 3183 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:11.706436 kubelet[3183]: E1216 13:09:11.706394 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:11.707814 kubelet[3183]: E1216 13:09:11.707761 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:09:11.984006 containerd[1739]: time="2025-12-16T13:09:11.983727119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:11.985043 kubelet[3183]: E1216 13:09:11.984558 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:12.349697 containerd[1739]: time="2025-12-16T13:09:12.348252253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:12.351292 containerd[1739]: time="2025-12-16T13:09:12.351243502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:12.351382 containerd[1739]: time="2025-12-16T13:09:12.351342583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:12.352885 kubelet[3183]: E1216 13:09:12.352845 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:12.353197 kubelet[3183]: E1216 13:09:12.352906 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:12.353197 kubelet[3183]: E1216 13:09:12.353142 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m26wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:12.354558 kubelet[3183]: E1216 13:09:12.354521 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:09:14.981216 kubelet[3183]: E1216 13:09:14.981167 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:09:15.981645 kubelet[3183]: E1216 13:09:15.981590 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:09:16.981985 kubelet[3183]: E1216 13:09:16.981611 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:09:22.986001 kubelet[3183]: E1216 13:09:22.985584 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:25.981129 kubelet[3183]: E1216 13:09:25.981056 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:09:26.981922 kubelet[3183]: E1216 13:09:26.981834 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:09:27.081913 systemd[1]: Started sshd@7-10.200.0.12:22-10.200.16.10:57094.service - OpenSSH per-connection server daemon (10.200.16.10:57094). Dec 16 13:09:27.639357 sshd[5258]: Accepted publickey for core from 10.200.16.10 port 57094 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:27.640440 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:27.648023 systemd-logind[1709]: New session 10 of user core. Dec 16 13:09:27.652824 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:09:27.983685 kubelet[3183]: E1216 13:09:27.982950 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:09:27.983685 kubelet[3183]: E1216 13:09:27.983293 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:09:28.124284 sshd[5261]: Connection closed by 10.200.16.10 port 57094 Dec 16 13:09:28.124767 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:28.128584 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:09:28.129953 systemd[1]: sshd@7-10.200.0.12:22-10.200.16.10:57094.service: Deactivated successfully. Dec 16 13:09:28.133861 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:09:28.136893 systemd-logind[1709]: Removed session 10. 
Dec 16 13:09:28.983561 kubelet[3183]: E1216 13:09:28.983161 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:09:33.224414 systemd[1]: Started sshd@8-10.200.0.12:22-10.200.16.10:52762.service - OpenSSH per-connection server daemon (10.200.16.10:52762). Dec 16 13:09:33.781723 sshd[5299]: Accepted publickey for core from 10.200.16.10 port 52762 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:33.782467 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:33.790503 systemd-logind[1709]: New session 11 of user core. Dec 16 13:09:33.796830 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:09:33.981026 kubelet[3183]: E1216 13:09:33.980992 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:34.225823 sshd[5302]: Connection closed by 10.200.16.10 port 52762 Dec 16 13:09:34.227201 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:34.230309 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:09:34.230454 systemd[1]: sshd@8-10.200.0.12:22-10.200.16.10:52762.service: Deactivated successfully. Dec 16 13:09:34.232381 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:09:34.233945 systemd-logind[1709]: Removed session 11. 
Dec 16 13:09:36.982780 kubelet[3183]: E1216 13:09:36.982476 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:09:39.328262 systemd[1]: Started sshd@9-10.200.0.12:22-10.200.16.10:52772.service - OpenSSH per-connection server daemon (10.200.16.10:52772). Dec 16 13:09:39.885414 sshd[5315]: Accepted publickey for core from 10.200.16.10 port 52772 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:39.886472 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:39.890742 systemd-logind[1709]: New session 12 of user core. Dec 16 13:09:39.894839 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:09:39.980927 kubelet[3183]: E1216 13:09:39.980891 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:09:40.325956 sshd[5318]: Connection closed by 10.200.16.10 port 52772 Dec 16 13:09:40.326420 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:40.330415 systemd[1]: sshd@9-10.200.0.12:22-10.200.16.10:52772.service: Deactivated successfully. Dec 16 13:09:40.330741 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:09:40.333653 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:09:40.335766 systemd-logind[1709]: Removed session 12. Dec 16 13:09:40.437600 systemd[1]: Started sshd@10-10.200.0.12:22-10.200.16.10:49368.service - OpenSSH per-connection server daemon (10.200.16.10:49368). 
Dec 16 13:09:40.985158 kubelet[3183]: E1216 13:09:40.984771 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:09:40.986918 kubelet[3183]: E1216 13:09:40.986838 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:09:40.999280 sshd[5337]: Accepted publickey for core from 10.200.16.10 port 49368 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:41.001210 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:41.011165 systemd-logind[1709]: New session 13 of user core. Dec 16 13:09:41.014852 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:09:41.500758 sshd[5340]: Connection closed by 10.200.16.10 port 49368 Dec 16 13:09:41.502836 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:41.507101 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:09:41.508931 systemd[1]: sshd@10-10.200.0.12:22-10.200.16.10:49368.service: Deactivated successfully. Dec 16 13:09:41.511348 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:09:41.512625 systemd-logind[1709]: Removed session 13. Dec 16 13:09:41.599756 systemd[1]: Started sshd@11-10.200.0.12:22-10.200.16.10:49372.service - OpenSSH per-connection server daemon (10.200.16.10:49372). Dec 16 13:09:41.980403 kubelet[3183]: E1216 13:09:41.980369 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:09:42.151106 sshd[5350]: Accepted publickey for core from 10.200.16.10 port 49372 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:42.153340 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:42.159576 systemd-logind[1709]: New session 14 of user core. Dec 16 13:09:42.165930 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 13:09:42.625596 sshd[5357]: Connection closed by 10.200.16.10 port 49372 Dec 16 13:09:42.626857 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:42.631472 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:09:42.632119 systemd[1]: sshd@11-10.200.0.12:22-10.200.16.10:49372.service: Deactivated successfully. Dec 16 13:09:42.634873 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:09:42.636419 systemd-logind[1709]: Removed session 14. Dec 16 13:09:44.981767 containerd[1739]: time="2025-12-16T13:09:44.981714662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:45.343994 containerd[1739]: time="2025-12-16T13:09:45.343810853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:45.350320 containerd[1739]: time="2025-12-16T13:09:45.350273189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:45.350434 containerd[1739]: time="2025-12-16T13:09:45.350351977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:09:45.350478 kubelet[3183]: E1216 13:09:45.350437 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:45.350796 kubelet[3183]: E1216 13:09:45.350483 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:45.350796 kubelet[3183]: E1216 13:09:45.350587 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:744f40c00f6d4b9f9afec70a4a976be7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:45.353132 containerd[1739]: time="2025-12-16T13:09:45.353107083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:45.722223 containerd[1739]: time="2025-12-16T13:09:45.722178701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:45.726598 containerd[1739]: time="2025-12-16T13:09:45.726123501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:45.726598 containerd[1739]: time="2025-12-16T13:09:45.726548875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:45.726955 kubelet[3183]: E1216 13:09:45.726889 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:45.726955 kubelet[3183]: E1216 13:09:45.726939 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:45.727509 kubelet[3183]: E1216 13:09:45.727470 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8sfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555f4cfd69-tjs7n_calico-system(d909e58a-0385-4774-8fd8-0e43ade4f95f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:45.729441 kubelet[3183]: E1216 13:09:45.728858 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:47.729927 systemd[1]: Started sshd@12-10.200.0.12:22-10.200.16.10:49378.service - OpenSSH per-connection server daemon (10.200.16.10:49378). 
Dec 16 13:09:47.982227 kubelet[3183]: E1216 13:09:47.981945 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:09:48.285757 sshd[5373]: Accepted publickey for core from 10.200.16.10 port 49378 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:48.287238 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:48.291444 systemd-logind[1709]: New session 15 of user core. Dec 16 13:09:48.295836 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:09:48.723174 sshd[5378]: Connection closed by 10.200.16.10 port 49378 Dec 16 13:09:48.724830 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:48.728809 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:09:48.730531 systemd[1]: sshd@12-10.200.0.12:22-10.200.16.10:49378.service: Deactivated successfully. Dec 16 13:09:48.733313 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:09:48.737331 systemd-logind[1709]: Removed session 15. 
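Note how the csi-node-driver errors recur at widening intervals through this capture: the kubelet's image-pull back-off roughly doubles per failed attempt up to a ceiling, so the journal quiets down while the pods stay broken. A toy model of that schedule; the 10-second initial delay and 5-minute cap are the stock kubelet defaults as I recall them, not values read from this node:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: ~10s initial, doubling, capped at 5m.
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := initial
	var elapsed time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		elapsed += delay
		fmt.Printf("attempt %d: +%v (total %v)\n", attempt, delay, elapsed)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}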
Dec 16 13:09:50.982151 containerd[1739]: time="2025-12-16T13:09:50.982075986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:51.351085 containerd[1739]: time="2025-12-16T13:09:51.350960456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:51.354271 containerd[1739]: time="2025-12-16T13:09:51.354241337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:51.354343 containerd[1739]: time="2025-12-16T13:09:51.354310952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:09:51.354493 kubelet[3183]: E1216 13:09:51.354411 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:51.354796 kubelet[3183]: E1216 13:09:51.354503 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:51.354796 kubelet[3183]: E1216 13:09:51.354728 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qnpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76554f6877-gxtwh_calico-system(29fc32db-4a73-46de-9d39-e11c06875a97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:51.356379 kubelet[3183]: E1216 13:09:51.356197 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:09:53.829867 systemd[1]: Started sshd@13-10.200.0.12:22-10.200.16.10:51498.service - OpenSSH per-connection server daemon (10.200.16.10:51498). 
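The flattened &Container{...} dump above is the kubelet printing the Go corev1.Container it could not start. Rebuilt as source, the probe configuration for calico-kube-controllers reads as below (types from k8s.io/api/core/v1; only fields visible in the dump are filled in):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "calico-kube-controllers",
		Image: "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-l"}},
			},
			InitialDelaySeconds: 10,
			TimeoutSeconds:      10,
			PeriodSeconds:       60,
			SuccessThreshold:    1,
			FailureThreshold:    6,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-r"}},
			},
			TimeoutSeconds:   10,
			PeriodSeconds:    30,
			SuccessThreshold: 1,
			FailureThreshold: 3,
		},
	}
	fmt.Printf("%+v\n", c)
}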
Dec 16 13:09:53.982644 containerd[1739]: time="2025-12-16T13:09:53.982598275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:54.358492 containerd[1739]: time="2025-12-16T13:09:54.358344672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:54.361115 containerd[1739]: time="2025-12-16T13:09:54.361018730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:54.361115 containerd[1739]: time="2025-12-16T13:09:54.361036504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:54.361277 kubelet[3183]: E1216 13:09:54.361244 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:54.361528 kubelet[3183]: E1216 13:09:54.361291 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:54.361528 kubelet[3183]: E1216 13:09:54.361428 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m26wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-qq9w6_calico-apiserver(732c00ab-68ae-445e-a71a-f5b84da1878e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:54.362948 kubelet[3183]: E1216 13:09:54.362910 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:09:54.401076 sshd[5397]: Accepted publickey for core from 10.200.16.10 port 51498 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:09:54.402191 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:54.406801 systemd-logind[1709]: New session 16 of user core. Dec 16 13:09:54.417784 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:09:54.836749 sshd[5400]: Connection closed by 10.200.16.10 port 51498 Dec 16 13:09:54.837238 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:54.840229 systemd[1]: sshd@13-10.200.0.12:22-10.200.16.10:51498.service: Deactivated successfully. Dec 16 13:09:54.842189 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:09:54.842936 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:09:54.844710 systemd-logind[1709]: Removed session 16. 
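Every pull failure in this log carries the same shape, "rpc error: code = NotFound desc = ...": a gRPC status returned over the CRI, which the kubelet wraps into ErrImagePull and, on later syncs, ImagePullBackOff. A sketch of how such an error decodes with google.golang.org/grpc; the sample message is a stand-in mirroring the log, not a captured value:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classify separates "tag missing in registry" from other pull failures,
// the same distinction the kubelet strings above encode.
func classify(err error) {
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		fmt.Println("registry/tag missing:", s.Message())
		return
	}
	fmt.Println("other pull failure:", err)
}

func main() {
	// Stand-in error with the same shape as the CRI responses above.
	err := status.Error(codes.NotFound, `failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.30.4"`)
	classify(err)
	classify(errors.New("connection refused"))
}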
Dec 16 13:09:55.980533 containerd[1739]: time="2025-12-16T13:09:55.980464826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:56.341445 containerd[1739]: time="2025-12-16T13:09:56.340820022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:56.362234 containerd[1739]: time="2025-12-16T13:09:56.362161511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:56.362234 containerd[1739]: time="2025-12-16T13:09:56.362198037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:56.362406 kubelet[3183]: E1216 13:09:56.362331 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:56.362406 kubelet[3183]: E1216 13:09:56.362387 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:56.362718 kubelet[3183]: E1216 13:09:56.362652 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdqtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7dvks_calico-system(48b113be-574b-47a2-86df-86aede15472d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:56.362988 containerd[1739]: time="2025-12-16T13:09:56.362964981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:56.364278 kubelet[3183]: E1216 13:09:56.364242 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:09:56.731391 containerd[1739]: time="2025-12-16T13:09:56.731341357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:56.734157 containerd[1739]: time="2025-12-16T13:09:56.734035199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:56.734157 containerd[1739]: time="2025-12-16T13:09:56.734130537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:09:56.735154 kubelet[3183]: E1216 13:09:56.734470 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:56.735154 kubelet[3183]: E1216 13:09:56.734514 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:56.735154 kubelet[3183]: E1216 13:09:56.734646 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpkkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbcddbc87-8ttxw_calico-apiserver(543c36ea-093c-4498-a84b-c504d49ef8b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:56.736096 kubelet[3183]: E1216 13:09:56.736065 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:09:56.986163 kubelet[3183]: E1216 13:09:56.986030 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:09:59.935972 systemd[1]: Started sshd@14-10.200.0.12:22-10.200.16.10:51506.service - OpenSSH per-connection server daemon (10.200.16.10:51506). Dec 16 13:10:00.518466 sshd[5450]: Accepted publickey for core from 10.200.16.10 port 51506 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:00.521097 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:00.525280 systemd-logind[1709]: New session 17 of user core. Dec 16 13:10:00.533963 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:10:00.954213 sshd[5453]: Connection closed by 10.200.16.10 port 51506 Dec 16 13:10:00.955573 sshd-session[5450]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:00.957947 systemd[1]: sshd@14-10.200.0.12:22-10.200.16.10:51506.service: Deactivated successfully. Dec 16 13:10:00.959813 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:10:00.961107 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:10:00.962658 systemd-logind[1709]: Removed session 17. Dec 16 13:10:01.059866 systemd[1]: Started sshd@15-10.200.0.12:22-10.200.16.10:39444.service - OpenSSH per-connection server daemon (10.200.16.10:39444). Dec 16 13:10:01.617609 sshd[5465]: Accepted publickey for core from 10.200.16.10 port 39444 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:01.620238 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:01.626063 systemd-logind[1709]: New session 18 of user core. Dec 16 13:10:01.634090 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:10:01.982900 containerd[1739]: time="2025-12-16T13:10:01.982862414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:10:02.159707 sshd[5468]: Connection closed by 10.200.16.10 port 39444 Dec 16 13:10:02.161881 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:02.167558 systemd[1]: sshd@15-10.200.0.12:22-10.200.16.10:39444.service: Deactivated successfully. Dec 16 13:10:02.170280 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:10:02.172250 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:10:02.174123 systemd-logind[1709]: Removed session 18. Dec 16 13:10:02.269499 systemd[1]: Started sshd@16-10.200.0.12:22-10.200.16.10:39446.service - OpenSSH per-connection server daemon (10.200.16.10:39446). 
Dec 16 13:10:02.437877 containerd[1739]: time="2025-12-16T13:10:02.437837306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:02.445523 containerd[1739]: time="2025-12-16T13:10:02.445435289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:10:02.445770 containerd[1739]: time="2025-12-16T13:10:02.445509167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:10:02.446036 kubelet[3183]: E1216 13:10:02.445999 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:10:02.446382 kubelet[3183]: E1216 13:10:02.446070 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:10:02.446382 kubelet[3183]: E1216 13:10:02.446210 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:02.449444 containerd[1739]: time="2025-12-16T13:10:02.448846886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:10:02.827948 sshd[5477]: Accepted publickey for core from 10.200.16.10 port 39446 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:02.829052 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:02.833057 systemd-logind[1709]: New session 19 of user core. Dec 16 13:10:02.839808 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:10:02.842942 containerd[1739]: time="2025-12-16T13:10:02.842900627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:02.846321 containerd[1739]: time="2025-12-16T13:10:02.846275295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:10:02.846497 containerd[1739]: time="2025-12-16T13:10:02.846313308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:10:02.846780 kubelet[3183]: E1216 13:10:02.846738 3183 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:10:02.846836 kubelet[3183]: E1216 13:10:02.846788 3183 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:10:02.846940 kubelet[3183]: E1216 13:10:02.846910 3183 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bv5h8_calico-system(b24eac48-5262-426b-9b2f-c5c56fc3732b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:02.848300 kubelet[3183]: E1216 13:10:02.848255 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:10:03.981806 sshd[5480]: Connection closed by 10.200.16.10 port 39446 Dec 16 13:10:03.982499 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:03.986445 systemd[1]: sshd@16-10.200.0.12:22-10.200.16.10:39446.service: Deactivated successfully. Dec 16 13:10:03.989246 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:10:03.989939 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:10:03.992340 systemd-logind[1709]: Removed session 19. 
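Since the registry reports v3.30.4 missing under ghcr.io/flatcar/calico, the practical next step is to see which tags do exist there and repin the manifests accordingly. A registry-side listing, again assuming go-containerregistry; whether a usable tag is returned cannot be known from this log:

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// List published tags for one repository the node needs; whatever
	// comes back is what the DaemonSet could be repinned to.
	tags, err := crane.ListTags("ghcr.io/flatcar/calico/node-driver-registrar")
	if err != nil {
		panic(err)
	}
	for _, t := range tags {
		fmt.Println(t)
	}
}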
Dec 16 13:10:04.082503 systemd[1]: Started sshd@17-10.200.0.12:22-10.200.16.10:39460.service - OpenSSH per-connection server daemon (10.200.16.10:39460). Dec 16 13:10:04.640704 sshd[5497]: Accepted publickey for core from 10.200.16.10 port 39460 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:04.641529 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:04.646743 systemd-logind[1709]: New session 20 of user core. Dec 16 13:10:04.648943 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:10:04.985074 kubelet[3183]: E1216 13:10:04.985034 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:10:05.294427 sshd[5500]: Connection closed by 10.200.16.10 port 39460 Dec 16 13:10:05.295021 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:05.298431 systemd[1]: sshd@17-10.200.0.12:22-10.200.16.10:39460.service: Deactivated successfully. Dec 16 13:10:05.300380 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:10:05.301279 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:10:05.302713 systemd-logind[1709]: Removed session 20. Dec 16 13:10:05.395005 systemd[1]: Started sshd@18-10.200.0.12:22-10.200.16.10:39472.service - OpenSSH per-connection server daemon (10.200.16.10:39472). Dec 16 13:10:05.954794 sshd[5510]: Accepted publickey for core from 10.200.16.10 port 39472 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:05.956038 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:05.960265 systemd-logind[1709]: New session 21 of user core. Dec 16 13:10:05.962835 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:10:06.418684 sshd[5513]: Connection closed by 10.200.16.10 port 39472 Dec 16 13:10:06.420148 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:06.423599 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:10:06.424488 systemd[1]: sshd@18-10.200.0.12:22-10.200.16.10:39472.service: Deactivated successfully. Dec 16 13:10:06.427872 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:10:06.431372 systemd-logind[1709]: Removed session 21. 
Dec 16 13:10:07.981683 kubelet[3183]: E1216 13:10:07.981339 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:10:09.984030 kubelet[3183]: E1216 13:10:09.983787 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:10:09.984030 kubelet[3183]: E1216 13:10:09.983853 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:10:09.984808 kubelet[3183]: E1216 13:10:09.984687 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f" Dec 16 13:10:11.527849 systemd[1]: Started sshd@19-10.200.0.12:22-10.200.16.10:37236.service - OpenSSH per-connection server daemon (10.200.16.10:37236). Dec 16 13:10:12.102983 sshd[5527]: Accepted publickey for core from 10.200.16.10 port 37236 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:12.104085 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:12.108517 systemd-logind[1709]: New session 22 of user core. Dec 16 13:10:12.113827 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 13:10:12.553186 sshd[5530]: Connection closed by 10.200.16.10 port 37236 Dec 16 13:10:12.553726 sshd-session[5527]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:12.557153 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:10:12.557297 systemd[1]: sshd@19-10.200.0.12:22-10.200.16.10:37236.service: Deactivated successfully. Dec 16 13:10:12.559133 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:10:12.560289 systemd-logind[1709]: Removed session 22. Dec 16 13:10:12.986096 kubelet[3183]: E1216 13:10:12.986045 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:10:15.982276 kubelet[3183]: E1216 13:10:15.982231 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97" Dec 16 13:10:17.659126 systemd[1]: Started sshd@20-10.200.0.12:22-10.200.16.10:37250.service - OpenSSH per-connection server daemon (10.200.16.10:37250). Dec 16 13:10:18.226807 sshd[5544]: Accepted publickey for core from 10.200.16.10 port 37250 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:18.227960 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:18.232125 systemd-logind[1709]: New session 23 of user core. Dec 16 13:10:18.234813 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:10:18.662272 sshd[5547]: Connection closed by 10.200.16.10 port 37250 Dec 16 13:10:18.663716 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:18.666779 systemd[1]: sshd@20-10.200.0.12:22-10.200.16.10:37250.service: Deactivated successfully. Dec 16 13:10:18.668657 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:10:18.669360 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:10:18.671307 systemd-logind[1709]: Removed session 23. 
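Interleaved with the pull failures, sshd keeps accepting the same key (SHA256:72HAH21z...) for user core, at roughly six-second-lived sessions spaced about six seconds apart toward the end of this capture, consistent with an automated client rather than a human. The logged fingerprint can be recomputed from an authorized_keys entry with golang.org/x/crypto/ssh; the file path is an assumption (Flatcar also manages keys under ~/.ssh/authorized_keys.d):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed path to core's authorized keys on this node.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		panic(err)
	}
	// Parse the first key in the file and print its fingerprint in the
	// same "SHA256:..." form sshd logs above.
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(ssh.FingerprintSHA256(pub))
}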
Dec 16 13:10:18.984528 kubelet[3183]: E1216 13:10:18.984211 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e" Dec 16 13:10:20.983690 kubelet[3183]: E1216 13:10:20.983186 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d" Dec 16 13:10:21.981640 kubelet[3183]: E1216 13:10:21.981070 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8" Dec 16 13:10:23.762615 systemd[1]: Started sshd@21-10.200.0.12:22-10.200.16.10:41550.service - OpenSSH per-connection server daemon (10.200.16.10:41550). Dec 16 13:10:23.983094 kubelet[3183]: E1216 13:10:23.983046 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b" Dec 16 13:10:24.323314 sshd[5559]: Accepted publickey for core from 10.200.16.10 port 41550 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:10:24.324533 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:24.331782 systemd-logind[1709]: New session 24 of user core. Dec 16 13:10:24.336128 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 13:10:24.757416 sshd[5562]: Connection closed by 10.200.16.10 port 41550
Dec 16 13:10:24.758831 sshd-session[5559]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:24.765839 systemd[1]: sshd@21-10.200.0.12:22-10.200.16.10:41550.service: Deactivated successfully.
Dec 16 13:10:24.767932 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:10:24.770029 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:10:24.771020 systemd-logind[1709]: Removed session 24.
Dec 16 13:10:24.982212 kubelet[3183]: E1216 13:10:24.982173 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f"
Dec 16 13:10:26.980775 kubelet[3183]: E1216 13:10:26.980258 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97"
Dec 16 13:10:29.858108 systemd[1]: Started sshd@22-10.200.0.12:22-10.200.16.10:41564.service - OpenSSH per-connection server daemon (10.200.16.10:41564).
Dec 16 13:10:30.421125 sshd[5596]: Accepted publickey for core from 10.200.16.10 port 41564 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:10:30.422611 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:30.427439 systemd-logind[1709]: New session 25 of user core.
Dec 16 13:10:30.435828 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:10:30.883787 sshd[5599]: Connection closed by 10.200.16.10 port 41564
Dec 16 13:10:30.885903 sshd-session[5596]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:30.889405 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:10:30.890417 systemd[1]: sshd@22-10.200.0.12:22-10.200.16.10:41564.service: Deactivated successfully.
Dec 16 13:10:30.892692 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:10:30.894866 systemd-logind[1709]: Removed session 25.
Dec 16 13:10:30.983248 kubelet[3183]: E1216 13:10:30.983068 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-qq9w6" podUID="732c00ab-68ae-445e-a71a-f5b84da1878e"
Dec 16 13:10:32.990184 kubelet[3183]: E1216 13:10:32.989150 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbcddbc87-8ttxw" podUID="543c36ea-093c-4498-a84b-c504d49ef8b8"
Dec 16 13:10:32.992691 kubelet[3183]: E1216 13:10:32.990928 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7dvks" podUID="48b113be-574b-47a2-86df-86aede15472d"
Dec 16 13:10:35.991001 systemd[1]: Started sshd@23-10.200.0.12:22-10.200.16.10:37730.service - OpenSSH per-connection server daemon (10.200.16.10:37730).
Dec 16 13:10:36.541273 sshd[5611]: Accepted publickey for core from 10.200.16.10 port 37730 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:10:36.542454 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:36.548384 systemd-logind[1709]: New session 26 of user core.
Dec 16 13:10:36.552796 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:10:36.984026 kubelet[3183]: E1216 13:10:36.983899 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bv5h8" podUID="b24eac48-5262-426b-9b2f-c5c56fc3732b"
Dec 16 13:10:37.026534 sshd[5614]: Connection closed by 10.200.16.10 port 37730
Dec 16 13:10:37.028879 sshd-session[5611]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:37.032404 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:10:37.034138 systemd[1]: sshd@23-10.200.0.12:22-10.200.16.10:37730.service: Deactivated successfully.
Dec 16 13:10:37.037212 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:10:37.040120 systemd-logind[1709]: Removed session 26.
Dec 16 13:10:37.980705 kubelet[3183]: E1216 13:10:37.980411 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76554f6877-gxtwh" podUID="29fc32db-4a73-46de-9d39-e11c06875a97"
Dec 16 13:10:38.983907 kubelet[3183]: E1216 13:10:38.983855 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f4cfd69-tjs7n" podUID="d909e58a-0385-4774-8fd8-0e43ade4f95f"