Nov 6 00:22:35.970840 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025
Nov 6 00:22:35.970864 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:22:35.970877 kernel: BIOS-provided physical RAM map:
Nov 6 00:22:35.970885 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 6 00:22:35.970892 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 6 00:22:35.970899 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Nov 6 00:22:35.970908 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Nov 6 00:22:35.970915 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Nov 6 00:22:35.970922 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Nov 6 00:22:35.970931 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 6 00:22:35.970939 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 6 00:22:35.970945 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 6 00:22:35.970951 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 6 00:22:35.970957 kernel: printk: legacy bootconsole [earlyser0] enabled
Nov 6 00:22:35.970964 kernel: NX (Execute Disable) protection: active
Nov 6 00:22:35.970972 kernel: APIC: Static calls initialized
Nov 6 00:22:35.970979 kernel: efi: EFI v2.7 by Microsoft
Nov 6 00:22:35.970985 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Nov 6 00:22:35.970991 kernel: random: crng init done
Nov 6 00:22:35.971001 kernel: secureboot: Secure boot disabled
Nov 6 00:22:35.971012 kernel: SMBIOS 3.1.0 present.
Nov 6 00:22:35.971021 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Nov 6 00:22:35.971027 kernel: DMI: Memory slots populated: 2/2
Nov 6 00:22:35.971033 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 6 00:22:35.971040 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Nov 6 00:22:35.971046 kernel: Hyper-V: Nested features: 0x3e0101
Nov 6 00:22:35.971054 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 6 00:22:35.971060 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 6 00:22:35.971067 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 6 00:22:35.971073 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 6 00:22:35.971079 kernel: tsc: Detected 2299.999 MHz processor
Nov 6 00:22:35.971085 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:22:35.971093 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:22:35.971100 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Nov 6 00:22:35.971107 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 6 00:22:35.971116 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:22:35.971123 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Nov 6 00:22:35.971129 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Nov 6 00:22:35.971136 kernel: Using GB pages for direct mapping
Nov 6 00:22:35.971143 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:22:35.971152 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 6 00:22:35.971162 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971175 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971182 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 6 00:22:35.971189 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 6 00:22:35.971196 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971203 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971210 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971217 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Nov 6 00:22:35.971225 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Nov 6 00:22:35.971232 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 6 00:22:35.971239 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 6 00:22:35.971247 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Nov 6 00:22:35.971254 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 6 00:22:35.971261 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 6 00:22:35.971269 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 6 00:22:35.971276 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 6 00:22:35.971284 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Nov 6 00:22:35.971293 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Nov 6 00:22:35.971300 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 6 00:22:35.971308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Nov 6 00:22:35.971315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Nov 6 00:22:35.971323 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Nov 6 00:22:35.971331 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Nov 6 00:22:35.971338 kernel: Zone ranges:
Nov 6 00:22:35.971345 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:22:35.971353 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 6 00:22:35.971362 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 6 00:22:35.971369 kernel: Device empty
Nov 6 00:22:35.971378 kernel: Movable zone start for each node
Nov 6 00:22:35.971385 kernel: Early memory node ranges
Nov 6 00:22:35.971393 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 6 00:22:35.971400 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Nov 6 00:22:35.971407 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Nov 6 00:22:35.971414 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 6 00:22:35.971422 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 6 00:22:35.971430 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 6 00:22:35.971437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:22:35.971444 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 6 00:22:35.971452 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 6 00:22:35.971459 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Nov 6 00:22:35.971466 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 6 00:22:35.971473 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:22:35.971480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:22:35.971488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:22:35.971496 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 6 00:22:35.971504 kernel: TSC deadline timer available
Nov 6 00:22:35.971511 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:22:35.971518 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:22:35.971525 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:22:35.971532 kernel: CPU topo: Max. threads per core: 2
Nov 6 00:22:35.971539 kernel: CPU topo: Num. cores per package: 1
Nov 6 00:22:35.971547 kernel: CPU topo: Num. threads per package: 2
Nov 6 00:22:35.971554 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 6 00:22:35.971579 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 6 00:22:35.971586 kernel: Booting paravirtualized kernel on Hyper-V
Nov 6 00:22:35.971593 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:22:35.971601 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 6 00:22:35.971608 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 6 00:22:35.971615 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 6 00:22:35.971622 kernel: pcpu-alloc: [0] 0 1
Nov 6 00:22:35.971629 kernel: Hyper-V: PV spinlocks enabled
Nov 6 00:22:35.971636 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:22:35.971647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:22:35.971655 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 6 00:22:35.971662 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:22:35.971669 kernel: Fallback order for Node 0: 0
Nov 6 00:22:35.971677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Nov 6 00:22:35.971684 kernel: Policy zone: Normal
Nov 6 00:22:35.971691 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:22:35.971698 kernel: software IO TLB: area num 2.
Nov 6 00:22:35.971707 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 6 00:22:35.971714 kernel: ftrace: allocating 40021 entries in 157 pages
Nov 6 00:22:35.971721 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:22:35.971729 kernel: Dynamic Preempt: voluntary
Nov 6 00:22:35.971736 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:22:35.971744 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:22:35.971751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 6 00:22:35.971766 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:22:35.971774 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:22:35.971781 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:22:35.971789 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:22:35.971797 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 6 00:22:35.971806 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:22:35.971814 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:22:35.971822 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:22:35.971830 kernel: Using NULL legacy PIC
Nov 6 00:22:35.971838 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 6 00:22:35.971847 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:22:35.971854 kernel: Console: colour dummy device 80x25
Nov 6 00:22:35.971862 kernel: printk: legacy console [tty1] enabled
Nov 6 00:22:35.971869 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:22:35.971877 kernel: printk: legacy bootconsole [earlyser0] disabled
Nov 6 00:22:35.971884 kernel: ACPI: Core revision 20240827
Nov 6 00:22:35.971892 kernel: Failed to register legacy timer interrupt
Nov 6 00:22:35.971900 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:22:35.971908 kernel: x2apic enabled
Nov 6 00:22:35.971917 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:22:35.971924 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0
Nov 6 00:22:35.971932 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 6 00:22:35.971940 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Nov 6 00:22:35.971948 kernel: Hyper-V: Using IPI hypercalls
Nov 6 00:22:35.971955 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 6 00:22:35.971963 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 6 00:22:35.971970 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 6 00:22:35.971978 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 6 00:22:35.971987 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 6 00:22:35.971995 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 6 00:22:35.972003 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Nov 6 00:22:35.972010 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Nov 6 00:22:35.972018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:22:35.972026 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 6 00:22:35.972033 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 6 00:22:35.972041 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:22:35.972048 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:22:35.972055 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:22:35.972065 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 6 00:22:35.972072 kernel: RETBleed: Vulnerable
Nov 6 00:22:35.972080 kernel: Speculative Store Bypass: Vulnerable
Nov 6 00:22:35.972087 kernel: active return thunk: its_return_thunk
Nov 6 00:22:35.972095 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 6 00:22:35.972102 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:22:35.972109 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:22:35.972117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:22:35.972124 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 6 00:22:35.972132 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 6 00:22:35.972141 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 6 00:22:35.972147 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Nov 6 00:22:35.972154 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Nov 6 00:22:35.972160 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Nov 6 00:22:35.972166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:22:35.972172 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 6 00:22:35.972179 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 6 00:22:35.972187 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 6 00:22:35.972194 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Nov 6 00:22:35.972201 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Nov 6 00:22:35.972208 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Nov 6 00:22:35.972220 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Nov 6 00:22:35.972232 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:22:35.972240 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:22:35.972247 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:22:35.972255 kernel: landlock: Up and running.
Nov 6 00:22:35.972262 kernel: SELinux: Initializing.
Nov 6 00:22:35.972270 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 6 00:22:35.972281 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 6 00:22:35.972288 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Nov 6 00:22:35.972297 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Nov 6 00:22:35.972305 kernel: signal: max sigframe size: 11952
Nov 6 00:22:35.972315 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:22:35.972324 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:22:35.972332 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:22:35.972340 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 6 00:22:35.972348 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:22:35.972356 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:22:35.972364 kernel: .... node #0, CPUs: #1
Nov 6 00:22:35.972372 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 00:22:35.972380 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 6 00:22:35.972389 kernel: Memory: 8070880K/8383228K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 306132K reserved, 0K cma-reserved)
Nov 6 00:22:35.972399 kernel: devtmpfs: initialized
Nov 6 00:22:35.972407 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:22:35.972415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 6 00:22:35.972423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:22:35.972431 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 6 00:22:35.972439 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:22:35.972447 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:22:35.972456 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:22:35.972465 kernel: audit: type=2000 audit(1762388553.028:1): state=initialized audit_enabled=0 res=1
Nov 6 00:22:35.972473 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:22:35.972481 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:22:35.972489 kernel: cpuidle: using governor menu
Nov 6 00:22:35.972496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:22:35.972504 kernel: dca service started, version 1.12.1
Nov 6 00:22:35.972512 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Nov 6 00:22:35.972520 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Nov 6 00:22:35.972527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:22:35.972537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:22:35.972545 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:22:35.972552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:22:35.972560 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:22:35.972616 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:22:35.972623 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:22:35.972630 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:22:35.972638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:22:35.972645 kernel: ACPI: Interpreter enabled
Nov 6 00:22:35.972655 kernel: ACPI: PM: (supports S0 S5)
Nov 6 00:22:35.972663 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:22:35.972671 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:22:35.972679 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 6 00:22:35.972687 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 6 00:22:35.972695 kernel: iommu: Default domain type: Translated
Nov 6 00:22:35.972703 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:22:35.972710 kernel: efivars: Registered efivars operations
Nov 6 00:22:35.972718 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:22:35.972727 kernel: PCI: System does not support PCI
Nov 6 00:22:35.972735 kernel: vgaarb: loaded
Nov 6 00:22:35.972743 kernel: clocksource: Switched to clocksource tsc-early
Nov 6 00:22:35.972751 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:22:35.972758 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:22:35.972766 kernel: pnp: PnP ACPI init
Nov 6 00:22:35.972774 kernel: pnp: PnP ACPI: found 3 devices
Nov 6 00:22:35.972782 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:22:35.972790 kernel: NET: Registered PF_INET protocol family
Nov 6 00:22:35.972800 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 6 00:22:35.972808 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 6 00:22:35.972816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:22:35.972824 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:22:35.972832 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 6 00:22:35.972839 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 6 00:22:35.972847 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 6 00:22:35.972855 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 6 00:22:35.972863 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:22:35.972872 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:22:35.972880 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:22:35.972888 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 6 00:22:35.972895 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Nov 6 00:22:35.972903 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Nov 6 00:22:35.972911 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Nov 6 00:22:35.972919 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Nov 6 00:22:35.972926 kernel: clocksource: Switched to clocksource tsc
Nov 6 00:22:35.972934 kernel: Initialise system trusted keyrings
Nov 6 00:22:35.972943 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 6 00:22:35.972951 kernel: Key type asymmetric registered
Nov 6 00:22:35.972958 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:22:35.972966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:22:35.972974 kernel: io scheduler mq-deadline registered
Nov 6 00:22:35.972981 kernel: io scheduler kyber registered
Nov 6 00:22:35.972989 kernel: io scheduler bfq registered
Nov 6 00:22:35.972997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:22:35.973004 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:22:35.973014 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:22:35.973022 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 6 00:22:35.973030 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:22:35.973038 kernel: i8042: PNP: No PS/2 controller found.
Nov 6 00:22:35.973156 kernel: rtc_cmos 00:02: registered as rtc0
Nov 6 00:22:35.973294 kernel: rtc_cmos 00:02: setting system clock to 2025-11-06T00:22:35 UTC (1762388555)
Nov 6 00:22:35.973362 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 6 00:22:35.973374 kernel: intel_pstate: Intel P-state driver initializing
Nov 6 00:22:35.973382 kernel: efifb: probing for efifb
Nov 6 00:22:35.973407 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 6 00:22:35.973605 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 6 00:22:35.973616 kernel: efifb: scrolling: redraw
Nov 6 00:22:35.973624 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 6 00:22:35.973632 kernel: Console: switching to colour frame buffer device 128x48
Nov 6 00:22:35.973640 kernel: fb0: EFI VGA frame buffer device
Nov 6 00:22:35.973648 kernel: pstore: Using crash dump compression: deflate
Nov 6 00:22:35.973659 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 6 00:22:35.973667 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:22:35.973675 kernel: Segment Routing with IPv6
Nov 6 00:22:35.973683 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:22:35.973691 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:22:35.973699 kernel: Key type dns_resolver registered
Nov 6 00:22:35.973707 kernel: IPI shorthand broadcast: enabled
Nov 6 00:22:35.973715 kernel: sched_clock: Marking stable (2656190434, 99395721)->(3059669733, -304083578)
Nov 6 00:22:35.973723 kernel: registered taskstats version 1
Nov 6 00:22:35.973731 kernel: Loading compiled-in X.509 certificates
Nov 6 00:22:35.973740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31'
Nov 6 00:22:35.973749 kernel: Demotion targets for Node 0: null
Nov 6 00:22:35.973757 kernel: Key type .fscrypt registered
Nov 6 00:22:35.973765 kernel: Key type fscrypt-provisioning registered
Nov 6 00:22:35.973773 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:22:35.973781 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:22:35.973789 kernel: ima: No architecture policies found
Nov 6 00:22:35.973796 kernel: clk: Disabling unused clocks
Nov 6 00:22:35.973804 kernel: Warning: unable to open an initial console.
Nov 6 00:22:35.973815 kernel: Freeing unused kernel image (initmem) memory: 45548K
Nov 6 00:22:35.973823 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:22:35.973831 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Nov 6 00:22:35.973839 kernel: Run /init as init process
Nov 6 00:22:35.973847 kernel: with arguments:
Nov 6 00:22:35.973855 kernel: /init
Nov 6 00:22:35.973863 kernel: with environment:
Nov 6 00:22:35.973871 kernel: HOME=/
Nov 6 00:22:35.973879 kernel: TERM=linux
Nov 6 00:22:35.973891 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:22:35.973903 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:22:35.973912 systemd[1]: Detected virtualization microsoft.
Nov 6 00:22:35.973921 systemd[1]: Detected architecture x86-64.
Nov 6 00:22:35.973929 systemd[1]: Running in initrd.
Nov 6 00:22:35.973937 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:22:35.973946 systemd[1]: Hostname set to .
Nov 6 00:22:35.973956 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:22:35.973965 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:22:35.973974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:22:35.973982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:22:35.973992 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:22:35.974000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:22:35.974009 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:22:35.974020 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:22:35.974030 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 00:22:35.974038 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 00:22:35.974047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:22:35.974056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:22:35.974064 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:22:35.974073 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:22:35.974082 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:22:35.974092 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:22:35.974100 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:22:35.974109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:22:35.974117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:22:35.974126 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:22:35.974134 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:22:35.974143 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:22:35.974151 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:22:35.974160 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:22:35.974170 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:22:35.974179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:22:35.974187 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:22:35.974196 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:22:35.974205 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:22:35.974213 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:22:35.974222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:22:35.974231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:22:35.974248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:22:35.974276 systemd-journald[186]: Collecting audit messages is disabled.
Nov 6 00:22:35.974301 systemd-journald[186]: Journal started
Nov 6 00:22:35.974322 systemd-journald[186]: Runtime Journal (/run/log/journal/830f47572d7b4bf18df3e35b8c0bde73) is 8M, max 158.6M, 150.6M free.
Nov 6 00:22:35.971189 systemd-modules-load[187]: Inserted module 'overlay'
Nov 6 00:22:35.977682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:22:35.982941 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:22:35.986580 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:22:35.990729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:22:36.001664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:22:36.008716 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:22:36.010431 systemd-modules-load[187]: Inserted module 'br_netfilter'
Nov 6 00:22:36.010595 kernel: Bridge firewalling registered
Nov 6 00:22:36.014218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:22:36.016664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:22:36.024760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:22:36.029084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:22:36.032987 systemd-tmpfiles[199]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:22:36.035574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:22:36.040508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:22:36.045753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:22:36.057665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:22:36.064673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:22:36.071535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:22:36.075593 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:22:36.086455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:22:36.096977 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:22:36.121590 systemd-resolved[213]: Positive Trust Anchors:
Nov 6 00:22:36.121600 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:22:36.121630 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:22:36.142165 systemd-resolved[213]: Defaulting to hostname 'linux'.
Nov 6 00:22:36.142920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:22:36.146199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:22:36.171580 kernel: SCSI subsystem initialized
Nov 6 00:22:36.177578 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:22:36.185584 kernel: iscsi: registered transport (tcp)
Nov 6 00:22:36.202341 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:22:36.202377 kernel: QLogic iSCSI HBA Driver
Nov 6 00:22:36.213944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:22:36.229310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:22:36.230005 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:22:36.258893 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:22:36.262660 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:22:36.306583 kernel: raid6: avx512x4 gen() 46416 MB/s Nov 6 00:22:36.323573 kernel: raid6: avx512x2 gen() 46089 MB/s Nov 6 00:22:36.340573 kernel: raid6: avx512x1 gen() 29964 MB/s Nov 6 00:22:36.357577 kernel: raid6: avx2x4 gen() 38847 MB/s Nov 6 00:22:36.375573 kernel: raid6: avx2x2 gen() 42490 MB/s Nov 6 00:22:36.393053 kernel: raid6: avx2x1 gen() 31313 MB/s Nov 6 00:22:36.393075 kernel: raid6: using algorithm avx512x4 gen() 46416 MB/s Nov 6 00:22:36.410931 kernel: raid6: .... xor() 7894 MB/s, rmw enabled Nov 6 00:22:36.410961 kernel: raid6: using avx512x2 recovery algorithm Nov 6 00:22:36.428583 kernel: xor: automatically using best checksumming function avx Nov 6 00:22:36.535578 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:22:36.539617 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:36.543202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:36.558448 systemd-udevd[435]: Using default interface naming scheme 'v255'. Nov 6 00:22:36.562021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:36.568348 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:22:36.583025 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Nov 6 00:22:36.599123 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:22:36.601419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:36.633532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:36.643659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:22:36.672595 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:22:36.690602 kernel: hv_vmbus: Vmbus version:5.3 Nov 6 00:22:36.702374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 6 00:22:36.707859 kernel: AES CTR mode by8 optimization enabled Nov 6 00:22:36.702508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:36.707644 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:36.713574 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 6 00:22:36.713593 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 6 00:22:36.717370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:36.738603 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 6 00:22:36.738633 kernel: PTP clock support registered Nov 6 00:22:36.744580 kernel: hv_vmbus: registering driver hv_pci Nov 6 00:22:36.750641 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 6 00:22:36.763587 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 00:22:36.768850 kernel: hv_utils: Registering HyperV Utility Driver Nov 6 00:22:36.768899 kernel: hv_vmbus: registering driver hv_utils Nov 6 00:22:36.769076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 6 00:22:36.775435 kernel: hv_vmbus: registering driver hv_storvsc Nov 6 00:22:36.775456 kernel: hv_vmbus: registering driver hid_hyperv Nov 6 00:22:36.775467 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 6 00:22:36.775600 kernel: hv_vmbus: registering driver hv_netvsc Nov 6 00:22:36.781221 kernel: hv_utils: Shutdown IC version 3.2 Nov 6 00:22:36.781254 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 6 00:22:36.784732 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 6 00:22:36.784863 kernel: hv_utils: Heartbeat IC version 3.0 Nov 6 00:22:36.786335 kernel: scsi host0: storvsc_host_t Nov 6 00:22:36.788361 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 6 00:22:36.791599 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 6 00:22:36.794484 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:22:36.794625 kernel: hv_utils: TimeSync IC version 4.0 Nov 6 00:22:37.311606 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 6 00:22:37.311676 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4104ee (unnamed net_device) (uninitialized): VF slot 1 added Nov 6 00:22:37.311879 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 6 00:22:37.311896 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 6 00:22:37.306379 systemd-resolved[213]: Clock change detected. Flushing caches. 
Nov 6 00:22:37.329297 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:22:37.329450 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 6 00:22:37.347786 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 6 00:22:37.347976 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:22:37.349369 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 6 00:22:37.353538 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 6 00:22:37.353679 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 6 00:22:37.365366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:37.379360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#73 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:37.508352 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 6 00:22:37.514356 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:37.833357 kernel: nvme nvme0: using unchecked data buffer Nov 6 00:22:38.017977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 6 00:22:38.061785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 6 00:22:38.071279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 6 00:22:38.148460 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 6 00:22:38.148558 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 6 00:22:38.148894 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:38.157958 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:38.159807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 6 00:22:38.160294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:38.160924 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:22:38.163437 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:22:38.187950 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:38.193383 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:38.338004 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 6 00:22:38.338173 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 6 00:22:38.340971 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 6 00:22:38.342463 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:22:38.347481 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 6 00:22:38.351502 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 6 00:22:38.362509 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 6 00:22:38.364359 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 6 00:22:38.379613 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:22:38.379733 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 6 00:22:38.382131 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 6 00:22:38.387057 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 6 00:22:38.397356 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 6 00:22:38.400692 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4104ee eth0: VF registering: eth1 Nov 6 00:22:38.400872 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 6 00:22:38.404364 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 6 00:22:39.204129 
disk-uuid[653]: The operation has completed successfully. Nov 6 00:22:39.206177 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:22:39.255792 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:22:39.255877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:22:39.292457 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:22:39.309384 sh[696]: Success Nov 6 00:22:39.337822 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:22:39.337877 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:22:39.339505 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:22:39.348369 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:22:39.625139 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:22:39.629380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:22:39.642414 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:22:39.660349 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (709) Nov 6 00:22:39.660397 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:22:39.661706 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:40.050688 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 00:22:40.050780 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:22:40.051722 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:22:40.087453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:22:40.090783 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Nov 6 00:22:40.091910 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:22:40.093910 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:22:40.109953 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:22:40.132859 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (732) Nov 6 00:22:40.132893 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:40.136337 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:40.184685 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:40.184778 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:40.185662 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:40.186198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:40.188753 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:40.198682 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:40.204447 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:22:40.211445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 6 00:22:40.225231 systemd-networkd[874]: lo: Link UP Nov 6 00:22:40.225239 systemd-networkd[874]: lo: Gained carrier Nov 6 00:22:40.230195 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:22:40.226250 systemd-networkd[874]: Enumeration completed Nov 6 00:22:40.235103 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:22:40.235321 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4104ee eth0: Data path switched to VF: enP30832s1 Nov 6 00:22:40.226320 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:40.226709 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:40.226712 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:40.229525 systemd[1]: Reached target network.target - Network. Nov 6 00:22:40.236013 systemd-networkd[874]: enP30832s1: Link UP Nov 6 00:22:40.236072 systemd-networkd[874]: eth0: Link UP Nov 6 00:22:40.236154 systemd-networkd[874]: eth0: Gained carrier Nov 6 00:22:40.236165 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:40.241172 systemd-networkd[874]: enP30832s1: Gained carrier Nov 6 00:22:40.261372 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:22:41.253715 ignition[879]: Ignition 2.22.0 Nov 6 00:22:41.253727 ignition[879]: Stage: fetch-offline Nov 6 00:22:41.256167 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:41.253836 ignition[879]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:41.258869 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 00:22:41.253842 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:41.253926 ignition[879]: parsed url from cmdline: "" Nov 6 00:22:41.253929 ignition[879]: no config URL provided Nov 6 00:22:41.253933 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:41.253937 ignition[879]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:41.253941 ignition[879]: failed to fetch config: resource requires networking Nov 6 00:22:41.254963 ignition[879]: Ignition finished successfully Nov 6 00:22:41.287572 ignition[890]: Ignition 2.22.0 Nov 6 00:22:41.287582 ignition[890]: Stage: fetch Nov 6 00:22:41.287758 ignition[890]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:41.287765 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:41.287828 ignition[890]: parsed url from cmdline: "" Nov 6 00:22:41.287830 ignition[890]: no config URL provided Nov 6 00:22:41.287834 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:41.287838 ignition[890]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:41.287855 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 6 00:22:41.337439 ignition[890]: GET result: OK Nov 6 00:22:41.337495 ignition[890]: config has been read from IMDS userdata Nov 6 00:22:41.337520 ignition[890]: parsing config with SHA512: 768fb7008efa5849f448d933412e0e8a1b8afc10a3084317722e0d66ca269e27110e5b08e96d163514f4708a30d2c4ceccce50546918487173b93774f19de134 Nov 6 00:22:41.340784 unknown[890]: fetched base config from "system" Nov 6 00:22:41.340793 unknown[890]: fetched base config from "system" Nov 6 00:22:41.341072 ignition[890]: fetch: fetch complete Nov 6 00:22:41.340798 unknown[890]: fetched user config from "azure" Nov 6 00:22:41.341076 ignition[890]: fetch: fetch passed Nov 6 00:22:41.342889 systemd[1]: Finished ignition-fetch.service - Ignition 
(fetch). Nov 6 00:22:41.341110 ignition[890]: Ignition finished successfully Nov 6 00:22:41.351649 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:22:41.380618 ignition[897]: Ignition 2.22.0 Nov 6 00:22:41.380629 ignition[897]: Stage: kargs Nov 6 00:22:41.380840 ignition[897]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:41.383823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:22:41.380848 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:41.387780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:22:41.381808 ignition[897]: kargs: kargs passed Nov 6 00:22:41.381848 ignition[897]: Ignition finished successfully Nov 6 00:22:41.408047 ignition[903]: Ignition 2.22.0 Nov 6 00:22:41.408056 ignition[903]: Stage: disks Nov 6 00:22:41.410157 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:22:41.408234 ignition[903]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:41.413698 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:41.408241 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:41.417377 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:22:41.409007 ignition[903]: disks: disks passed Nov 6 00:22:41.421177 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:41.409034 ignition[903]: Ignition finished successfully Nov 6 00:22:41.425856 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:41.428721 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:41.434641 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 6 00:22:41.506426 systemd-fsck[912]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 6 00:22:41.517773 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:22:41.523331 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:22:42.071484 systemd-networkd[874]: eth0: Gained IPv6LL Nov 6 00:22:43.575359 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:22:43.575492 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:22:43.577891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:43.654909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:43.673016 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:22:43.677466 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 00:22:43.683376 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (921) Nov 6 00:22:43.683836 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:22:43.692904 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:43.692926 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:43.683867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:43.693304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:22:43.698448 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:22:43.709359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:43.709397 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:43.709409 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:43.711710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:22:44.208847 coreos-metadata[923]: Nov 06 00:22:44.208 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:22:44.211123 coreos-metadata[923]: Nov 06 00:22:44.210 INFO Fetch successful Nov 6 00:22:44.211123 coreos-metadata[923]: Nov 06 00:22:44.210 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:22:44.218003 coreos-metadata[923]: Nov 06 00:22:44.217 INFO Fetch successful Nov 6 00:22:44.234627 coreos-metadata[923]: Nov 06 00:22:44.234 INFO wrote hostname ci-4459.1.0-n-1b1a1d3a2e to /sysroot/etc/hostname Nov 6 00:22:44.236317 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:22:44.493224 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:22:44.537559 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:22:44.572100 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:22:44.594094 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:22:45.669836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:45.672484 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:22:45.686018 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:22:45.697283 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 6 00:22:45.700511 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:45.720248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:22:45.728653 ignition[1039]: INFO : Ignition 2.22.0 Nov 6 00:22:45.728653 ignition[1039]: INFO : Stage: mount Nov 6 00:22:45.737449 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:45.737449 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:45.737449 ignition[1039]: INFO : mount: mount passed Nov 6 00:22:45.737449 ignition[1039]: INFO : Ignition finished successfully Nov 6 00:22:45.731186 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:22:45.735407 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:22:45.747807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:45.762354 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1052) Nov 6 00:22:45.765365 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:45.765402 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:45.769405 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:22:45.769501 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:22:45.770778 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:22:45.772073 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 00:22:45.799795 ignition[1069]: INFO : Ignition 2.22.0 Nov 6 00:22:45.799795 ignition[1069]: INFO : Stage: files Nov 6 00:22:45.801945 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:45.801945 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:22:45.801945 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:22:45.815433 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:22:45.815433 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:22:45.915180 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:22:45.917108 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:22:45.917108 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:22:45.915468 unknown[1069]: wrote ssh authorized keys file for user: core Nov 6 00:22:46.008933 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:46.013397 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:22:46.057941 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:22:46.166921 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 
00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:46.172434 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:46.195376 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 00:22:46.554315 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:22:47.811697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:47.811697 ignition[1069]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:22:47.859068 ignition[1069]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:47.864203 ignition[1069]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:47.864203 ignition[1069]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:22:47.864203 ignition[1069]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:47.875207 ignition[1069]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:47.875207 ignition[1069]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:47.875207 ignition[1069]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:47.875207 ignition[1069]: INFO : files: files passed Nov 6 00:22:47.875207 ignition[1069]: INFO : Ignition finished successfully Nov 6 00:22:47.868868 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:22:47.873298 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:22:47.888447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 6 00:22:47.898664 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:22:47.898758 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:22:47.910885 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:22:47.910885 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:22:47.916942 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:22:47.915090 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:22:47.924507 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:22:47.928128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:22:47.960766 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:22:47.960843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:22:47.965536 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:22:47.965789 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:22:47.971423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 00:22:47.971978 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 00:22:47.990246 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:22:47.993462 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 00:22:48.012164 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:22:48.012316 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:22:48.012488 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 00:22:48.012712 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 00:22:48.012796 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:22:48.020700 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 00:22:48.024497 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 00:22:48.024633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 00:22:48.024833 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:22:48.025080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 00:22:48.034436 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:22:48.037900 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 00:22:48.042533 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:22:48.054463 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 00:22:48.055653 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 00:22:48.057742 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 00:22:48.059994 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 00:22:48.060115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:22:48.062611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:22:48.065275 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:22:48.069442 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 00:22:48.069683 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:22:48.072608 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 00:22:48.072717 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:22:48.074568 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 00:22:48.074670 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:22:48.075235 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 00:22:48.075326 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 00:22:48.075652 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 6 00:22:48.075742 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 6 00:22:48.078449 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 00:22:48.080517 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 00:22:48.080774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 00:22:48.080885 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:22:48.081362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 00:22:48.081456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:22:48.124813 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 00:22:48.127163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 00:22:48.134389 ignition[1123]: INFO : Ignition 2.22.0
Nov 6 00:22:48.134389 ignition[1123]: INFO : Stage: umount
Nov 6 00:22:48.134389 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:22:48.134389 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 6 00:22:48.134389 ignition[1123]: INFO : umount: umount passed
Nov 6 00:22:48.134389 ignition[1123]: INFO : Ignition finished successfully
Nov 6 00:22:48.139062 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 00:22:48.139563 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 00:22:48.139635 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 00:22:48.151711 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 00:22:48.151786 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 00:22:48.151897 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 00:22:48.151929 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 00:22:48.152227 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 6 00:22:48.152253 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 6 00:22:48.152288 systemd[1]: Stopped target network.target - Network.
Nov 6 00:22:48.152923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 00:22:48.152954 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:22:48.152985 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 00:22:48.153005 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 00:22:48.157356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:22:48.166560 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 00:22:48.173644 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 00:22:48.176243 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 00:22:48.179133 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:22:48.195139 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 00:22:48.195178 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:22:48.199458 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 00:22:48.200929 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 00:22:48.205741 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 00:22:48.205779 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 00:22:48.210583 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 00:22:48.211352 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 00:22:48.217324 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 00:22:48.217554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 00:22:48.223057 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 00:22:48.223844 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 00:22:48.230902 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 00:22:48.232884 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 00:22:48.232959 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 00:22:48.240737 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 00:22:48.243319 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 00:22:48.247999 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 00:22:48.248022 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:22:48.253678 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 00:22:48.253727 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 00:22:48.258430 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 00:22:48.260607 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 00:22:48.261051 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:22:48.261551 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:22:48.261592 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:22:48.261916 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 00:22:48.261949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:22:48.314803 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4104ee eth0: Data path switched from VF: enP30832s1
Nov 6 00:22:48.314960 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Nov 6 00:22:48.262448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 00:22:48.262481 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:22:48.263834 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:22:48.282817 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 6 00:22:48.282871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:22:48.284578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 00:22:48.284684 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:22:48.290937 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 00:22:48.290981 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:22:48.291389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 00:22:48.291411 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:22:48.291943 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 00:22:48.291975 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:22:48.292717 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 00:22:48.292745 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:22:48.293059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 00:22:48.293087 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:22:48.306447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 00:22:48.310598 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 00:22:48.310646 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:22:48.342185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 00:22:48.342222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:22:48.348395 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 6 00:22:48.348425 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:22:48.358854 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 00:22:48.358898 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:22:48.363773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:22:48.363820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:22:48.367993 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 6 00:22:48.368040 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Nov 6 00:22:48.368069 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 6 00:22:48.368099 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:22:48.368395 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 00:22:48.368460 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 00:22:48.368810 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 00:22:48.368861 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 00:22:48.369805 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 00:22:48.371434 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 00:22:48.400688 systemd[1]: Switching root.
Nov 6 00:22:48.501145 systemd-journald[186]: Journal stopped
Nov 6 00:22:55.889986 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Nov 6 00:22:55.890010 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 00:22:55.890022 kernel: SELinux: policy capability open_perms=1
Nov 6 00:22:55.890029 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 00:22:55.890036 kernel: SELinux: policy capability always_check_network=0
Nov 6 00:22:55.890044 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 00:22:55.890053 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 00:22:55.890061 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 00:22:55.890070 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 00:22:55.890078 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 00:22:55.890086 kernel: audit: type=1403 audit(1762388569.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 00:22:55.890094 systemd[1]: Successfully loaded SELinux policy in 182.408ms.
Nov 6 00:22:55.890103 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.728ms.
Nov 6 00:22:55.890111 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:22:55.890122 systemd[1]: Detected virtualization microsoft.
Nov 6 00:22:55.890140 systemd[1]: Detected architecture x86-64.
Nov 6 00:22:55.890148 systemd[1]: Detected first boot.
Nov 6 00:22:55.890156 systemd[1]: Hostname set to .
Nov 6 00:22:55.890164 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:22:55.890172 zram_generator::config[1165]: No configuration found.
Nov 6 00:22:55.890182 kernel: Guest personality initialized and is inactive
Nov 6 00:22:55.890189 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Nov 6 00:22:55.890196 kernel: Initialized host personality
Nov 6 00:22:55.890204 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 00:22:55.890212 systemd[1]: Populated /etc with preset unit settings.
Nov 6 00:22:55.890220 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 6 00:22:55.890229 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 00:22:55.890238 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 00:22:55.890247 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:22:55.890255 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 00:22:55.890264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 00:22:55.890272 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 00:22:55.890280 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 00:22:55.890288 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 00:22:55.890298 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 00:22:55.890307 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 00:22:55.890315 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 00:22:55.890324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:22:55.890332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:22:55.890355 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 00:22:55.890366 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 00:22:55.890375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 00:22:55.890384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:22:55.890395 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 00:22:55.890403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:22:55.890412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:22:55.890420 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 00:22:55.890428 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 00:22:55.890436 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:22:55.890445 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 00:22:55.890455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:22:55.890463 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:22:55.890472 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:22:55.890480 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:22:55.890488 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 00:22:55.890497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 00:22:55.890507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 00:22:55.890515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:22:55.890524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:22:55.890533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:22:55.890542 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 00:22:55.890550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 00:22:55.890559 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 00:22:55.890568 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 00:22:55.890577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:22:55.890590 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 00:22:55.890599 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 00:22:55.890614 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 00:22:55.890623 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 00:22:55.890633 systemd[1]: Reached target machines.target - Containers.
Nov 6 00:22:55.890642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 00:22:55.890652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:22:55.890682 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:22:55.890691 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:22:55.890699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:22:55.890708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:22:55.890716 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:22:55.890725 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:22:55.890733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:22:55.890742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 00:22:55.890753 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 00:22:55.890761 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 00:22:55.890769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 00:22:55.890778 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 00:22:55.890787 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:22:55.890795 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:22:55.890804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:22:55.890813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:22:55.890822 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 00:22:55.890831 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 00:22:55.890840 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:22:55.890849 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 6 00:22:55.890858 systemd[1]: Stopped verity-setup.service.
Nov 6 00:22:55.890867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:22:55.890875 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 00:22:55.890884 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 00:22:55.890893 kernel: loop: module loaded
Nov 6 00:22:55.890901 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 00:22:55.890909 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 00:22:55.890918 kernel: fuse: init (API version 7.41)
Nov 6 00:22:55.890925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 00:22:55.890934 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 00:22:55.890943 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:22:55.890967 systemd-journald[1248]: Collecting audit messages is disabled.
Nov 6 00:22:55.890989 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:22:55.890998 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:22:55.891007 systemd-journald[1248]: Journal started
Nov 6 00:22:55.891028 systemd-journald[1248]: Runtime Journal (/run/log/journal/5b8e53beeb05423db9f6604e58a3a2fc) is 8M, max 158.6M, 150.6M free.
Nov 6 00:22:55.370495 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 00:22:55.377718 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 6 00:22:55.378095 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 00:22:55.897611 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:22:55.899186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:22:55.899332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:22:55.901530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:22:55.901736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:22:55.905739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 00:22:55.910101 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:22:55.910248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:22:55.912024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:22:55.912150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:22:55.916566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:22:55.918073 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:22:55.921558 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 00:22:55.929611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:22:55.932084 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 00:22:55.943421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 00:22:55.945644 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 00:22:55.945677 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:22:55.948263 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 00:22:55.954445 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 00:22:55.976450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:22:55.997354 kernel: ACPI: bus type drm_connector registered
Nov 6 00:22:56.007927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 00:22:56.011844 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 00:22:56.014155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:22:56.016889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 00:22:56.018760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:22:56.020632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:22:56.023141 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 00:22:56.027460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:22:56.031894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:22:56.039570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:22:56.042795 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 00:22:56.045545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:22:56.047657 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 00:22:56.052539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 00:22:56.063794 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 00:22:56.066556 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 00:22:56.069469 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 00:22:56.076597 kernel: loop0: detected capacity change from 0 to 229808
Nov 6 00:22:56.078670 systemd-journald[1248]: Time spent on flushing to /var/log/journal/5b8e53beeb05423db9f6604e58a3a2fc is 10.204ms for 993 entries.
Nov 6 00:22:56.078670 systemd-journald[1248]: System Journal (/var/log/journal/5b8e53beeb05423db9f6604e58a3a2fc) is 8M, max 2.6G, 2.6G free.
Nov 6 00:22:56.130484 systemd-journald[1248]: Received client request to flush runtime journal.
Nov 6 00:22:56.131493 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 00:22:56.167368 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:22:56.167386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:22:56.231910 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:22:56.231925 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:22:56.234268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:22:56.240462 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 00:22:56.250371 kernel: loop1: detected capacity change from 0 to 27936
Nov 6 00:22:56.308402 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 00:22:56.379119 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 00:22:56.839375 kernel: loop2: detected capacity change from 0 to 110984
Nov 6 00:22:56.876688 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:22:56.967813 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 00:22:56.971487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:22:56.992562 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Nov 6 00:22:56.992580 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Nov 6 00:22:56.995054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:22:56.997813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:22:57.025328 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Nov 6 00:22:57.353368 kernel: loop3: detected capacity change from 0 to 128016
Nov 6 00:22:57.739495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:22:57.746481 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:22:57.774458 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:22:57.824711 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:22:57.833365 kernel: loop4: detected capacity change from 0 to 229808 Nov 6 00:22:57.857370 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:22:57.859390 kernel: loop5: detected capacity change from 0 to 27936 Nov 6 00:22:57.861474 kernel: hv_vmbus: registering driver hyperv_fb Nov 6 00:22:57.869365 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 6 00:22:57.873410 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 6 00:22:57.875616 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:22:57.879379 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:22:57.886366 kernel: loop6: detected capacity change from 0 to 110984 Nov 6 00:22:57.898448 kernel: hv_vmbus: registering driver hv_balloon Nov 6 00:22:57.903389 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 6 00:22:57.907396 kernel: loop7: detected capacity change from 0 to 128016 Nov 6 00:22:57.920551 (sd-merge)[1371]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 6 00:22:57.923529 (sd-merge)[1371]: Merged extensions into '/usr'. Nov 6 00:22:57.931370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:22:57.933161 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:22:57.933380 systemd[1]: Reloading... Nov 6 00:22:58.033359 zram_generator::config[1428]: No configuration found. Nov 6 00:22:58.197516 systemd-networkd[1342]: lo: Link UP Nov 6 00:22:58.197523 systemd-networkd[1342]: lo: Gained carrier Nov 6 00:22:58.199041 systemd-networkd[1342]: Enumeration completed Nov 6 00:22:58.199392 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 00:22:58.199444 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:58.202361 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:22:58.206251 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:22:58.206818 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4104ee eth0: Data path switched to VF: enP30832s1 Nov 6 00:22:58.206510 systemd-networkd[1342]: enP30832s1: Link UP Nov 6 00:22:58.206573 systemd-networkd[1342]: eth0: Link UP Nov 6 00:22:58.206575 systemd-networkd[1342]: eth0: Gained carrier Nov 6 00:22:58.206590 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:58.212389 systemd-networkd[1342]: enP30832s1: Gained carrier Nov 6 00:22:58.219414 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:22:58.342355 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 6 00:22:58.391981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 6 00:22:58.393955 systemd[1]: Reloading finished in 459 ms. Nov 6 00:22:58.410029 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:22:58.411624 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:58.412907 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:22:58.458003 systemd[1]: Starting ensure-sysext.service... Nov 6 00:22:58.462459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:22:58.465863 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Nov 6 00:22:58.469118 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:22:58.472360 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:58.475865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:58.486585 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:22:58.486837 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:22:58.487112 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:22:58.487431 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:22:58.488168 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:22:58.488527 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Nov 6 00:22:58.488619 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Nov 6 00:22:58.489274 systemd[1]: Reload requested from client PID 1503 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:22:58.489377 systemd[1]: Reloading... Nov 6 00:22:58.537528 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:58.537535 systemd-tmpfiles[1507]: Skipping /boot Nov 6 00:22:58.542359 zram_generator::config[1542]: No configuration found. Nov 6 00:22:58.553030 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:58.553040 systemd-tmpfiles[1507]: Skipping /boot Nov 6 00:22:58.707515 systemd[1]: Reloading finished in 217 ms. Nov 6 00:22:58.725064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 6 00:22:58.727383 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:22:58.727715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:58.734535 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:58.764804 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:22:58.768132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:22:58.773727 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:58.777563 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:22:58.782829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:58.782980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:58.784114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:58.788927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:58.793581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:58.795923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:58.796030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:58.796119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 6 00:22:58.801385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:58.801526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:58.801668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:58.801750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:58.801826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:58.808792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:58.809041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:58.810463 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:58.813562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:58.813666 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:58.813811 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:22:58.815888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 6 00:22:58.817383 systemd[1]: Finished ensure-sysext.service. Nov 6 00:22:58.820034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:58.820196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:58.826068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:58.826214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:58.829308 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:58.829909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:58.833194 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:22:58.839234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:58.839278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:58.840369 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:58.840528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:58.883496 systemd-resolved[1608]: Positive Trust Anchors: Nov 6 00:22:58.883506 systemd-resolved[1608]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:58.883534 systemd-resolved[1608]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:58.933131 systemd-resolved[1608]: Using system hostname 'ci-4459.1.0-n-1b1a1d3a2e'. Nov 6 00:22:58.947995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:58.948126 systemd[1]: Reached target network.target - Network. Nov 6 00:22:58.948299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:59.008546 augenrules[1641]: No rules Nov 6 00:22:59.009207 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:59.009456 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:59.093527 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:22:59.416473 systemd-networkd[1342]: eth0: Gained IPv6LL Nov 6 00:22:59.418223 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:22:59.418494 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:22:59.911515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:23:01.611642 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 6 00:23:01.614592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:23:04.432094 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:23:04.441718 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:23:04.444274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:23:04.470307 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:23:04.473549 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:23:04.476466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:23:04.478102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:23:04.479794 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:23:04.481570 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:23:04.484427 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:23:04.485777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:23:04.487315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:23:04.487337 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:23:04.488560 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:23:04.520312 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:23:04.522591 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 6 00:23:04.526750 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:23:04.528382 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:23:04.530261 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:23:04.533531 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:23:04.535121 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:23:04.538895 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:23:04.540943 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:23:04.542283 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:23:04.543594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:23:04.543612 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:23:04.571692 systemd[1]: Starting chronyd.service - NTP client/server... Nov 6 00:23:04.575136 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:23:04.580847 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:23:04.585698 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:23:04.590470 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:23:04.595467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:23:04.599662 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:23:04.611095 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Nov 6 00:23:04.612768 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:23:04.617428 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 6 00:23:04.620809 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 6 00:23:04.623495 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 6 00:23:04.626478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:04.632963 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:23:04.638465 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:23:04.642444 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:23:04.651775 jq[1664]: false Nov 6 00:23:04.653272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:23:04.659454 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:23:04.663877 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:23:04.666099 KVP[1669]: KVP starting; pid is:1669 Nov 6 00:23:04.668232 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:23:04.668662 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:23:04.670249 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 6 00:23:04.673190 chronyd[1658]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 6 00:23:04.674483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:23:04.675161 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Refreshing passwd entry cache Nov 6 00:23:04.673637 oslogin_cache_refresh[1668]: Refreshing passwd entry cache Nov 6 00:23:04.681539 kernel: hv_utils: KVP IC version 4.0 Nov 6 00:23:04.681378 KVP[1669]: KVP LIC Version: 3.1 Nov 6 00:23:04.683899 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:23:04.685021 extend-filesystems[1667]: Found /dev/nvme0n1p6 Nov 6 00:23:04.687665 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:23:04.687883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:23:04.694025 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:23:04.694602 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:23:04.700401 jq[1683]: true Nov 6 00:23:04.701319 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Failure getting users, quitting Nov 6 00:23:04.701316 oslogin_cache_refresh[1668]: Failure getting users, quitting Nov 6 00:23:04.701437 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:23:04.701331 oslogin_cache_refresh[1668]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:23:04.707130 oslogin_cache_refresh[1668]: Refreshing group entry cache Nov 6 00:23:04.708036 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Refreshing group entry cache Nov 6 00:23:04.705654 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 6 00:23:04.705886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:23:04.721624 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Failure getting groups, quitting Nov 6 00:23:04.721624 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:23:04.720792 oslogin_cache_refresh[1668]: Failure getting groups, quitting Nov 6 00:23:04.720800 oslogin_cache_refresh[1668]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:23:04.721968 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:23:04.721969 (ntainerd)[1697]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:23:04.722338 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:23:04.724565 chronyd[1658]: Timezone right/UTC failed leap second check, ignoring Nov 6 00:23:04.724918 systemd[1]: Started chronyd.service - NTP client/server. Nov 6 00:23:04.724703 chronyd[1658]: Loaded seccomp filter (level 2) Nov 6 00:23:04.728107 extend-filesystems[1667]: Found /dev/nvme0n1p9 Nov 6 00:23:04.731655 jq[1696]: true Nov 6 00:23:04.735363 extend-filesystems[1667]: Checking size of /dev/nvme0n1p9 Nov 6 00:23:04.776904 extend-filesystems[1667]: Old size kept for /dev/nvme0n1p9 Nov 6 00:23:04.777499 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:23:04.777688 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:23:04.781916 tar[1692]: linux-amd64/LICENSE Nov 6 00:23:04.782081 tar[1692]: linux-amd64/helm Nov 6 00:23:04.795279 update_engine[1682]: I20251106 00:23:04.795209 1682 main.cc:92] Flatcar Update Engine starting Nov 6 00:23:04.802253 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:23:04.868318 systemd-logind[1681]: New seat seat0. 
Nov 6 00:23:04.875388 bash[1733]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:23:04.871722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:23:04.879185 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:23:04.885284 systemd-logind[1681]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:23:04.885452 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:23:05.104255 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:23:05.127517 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:23:05.127883 dbus-daemon[1661]: [system] SELinux support is enabled Nov 6 00:23:05.130630 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:23:05.140436 update_engine[1682]: I20251106 00:23:05.139588 1682 update_check_scheduler.cc:74] Next update check in 9m31s Nov 6 00:23:05.140632 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:23:05.142060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:23:05.142089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:23:05.145469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:23:05.145488 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:23:05.149458 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 6 00:23:05.156310 systemd[1]: Started update-engine.service - Update Engine. 
Nov 6 00:23:05.158620 dbus-daemon[1661]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 00:23:05.159227 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:23:05.182807 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:23:05.182982 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:23:05.191028 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:23:05.216556 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 6 00:23:05.232451 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:23:05.236616 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:23:05.240604 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:23:05.242762 coreos-metadata[1660]: Nov 06 00:23:05.242 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:23:05.243248 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 6 00:23:05.249740 coreos-metadata[1660]: Nov 06 00:23:05.249 INFO Fetch successful Nov 6 00:23:05.249740 coreos-metadata[1660]: Nov 06 00:23:05.249 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 6 00:23:05.253093 coreos-metadata[1660]: Nov 06 00:23:05.253 INFO Fetch successful Nov 6 00:23:05.253597 coreos-metadata[1660]: Nov 06 00:23:05.253 INFO Fetching http://168.63.129.16/machine/dd68403e-7571-4dcd-bbaa-508c0d759a25/5dcbe58e%2D1711%2D47c7%2D9ccb%2Dadec9534620d.%5Fci%2D4459.1.0%2Dn%2D1b1a1d3a2e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 6 00:23:05.255160 coreos-metadata[1660]: Nov 06 00:23:05.255 INFO Fetch successful Nov 6 00:23:05.255160 coreos-metadata[1660]: Nov 06 00:23:05.255 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:23:05.262443 coreos-metadata[1660]: Nov 06 00:23:05.262 INFO Fetch successful Nov 6 00:23:05.293507 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:23:05.296218 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:23:05.312370 tar[1692]: linux-amd64/README.md Nov 6 00:23:05.332258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 6 00:23:05.403125 locksmithd[1779]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:23:05.527335 containerd[1697]: time="2025-11-06T00:23:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:23:05.529378 containerd[1697]: time="2025-11-06T00:23:05.528604062Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:23:05.537895 containerd[1697]: time="2025-11-06T00:23:05.537865911Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.587µs" Nov 6 00:23:05.537970 containerd[1697]: time="2025-11-06T00:23:05.537959126Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:23:05.538008 containerd[1697]: time="2025-11-06T00:23:05.538001095Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:23:05.538147 containerd[1697]: time="2025-11-06T00:23:05.538138933Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:23:05.538185 containerd[1697]: time="2025-11-06T00:23:05.538177950Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:23:05.538235 containerd[1697]: time="2025-11-06T00:23:05.538228576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538303 containerd[1697]: time="2025-11-06T00:23:05.538294204Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538332 containerd[1697]: time="2025-11-06T00:23:05.538324778Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538569 containerd[1697]: time="2025-11-06T00:23:05.538558831Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538600 containerd[1697]: time="2025-11-06T00:23:05.538593971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538629 containerd[1697]: time="2025-11-06T00:23:05.538622661Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538662 containerd[1697]: time="2025-11-06T00:23:05.538656417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:23:05.538754 containerd[1697]: time="2025-11-06T00:23:05.538732025Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.538881958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.538907676Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.538917005Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.538945263Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.539208038Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:23:05.539705 containerd[1697]: time="2025-11-06T00:23:05.539252802Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:23:05.551370 containerd[1697]: time="2025-11-06T00:23:05.551274106Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:23:05.551370 containerd[1697]: time="2025-11-06T00:23:05.551328802Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:23:05.551370 containerd[1697]: time="2025-11-06T00:23:05.551373451Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551385750Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551397922Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551407267Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551425630Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551437412Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551453654Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Nov 6 00:23:05.551468 containerd[1697]: time="2025-11-06T00:23:05.551463616Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:23:05.551586 containerd[1697]: time="2025-11-06T00:23:05.551471890Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:23:05.551586 containerd[1697]: time="2025-11-06T00:23:05.551483239Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:23:05.551586 containerd[1697]: time="2025-11-06T00:23:05.551574941Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:23:05.551639 containerd[1697]: time="2025-11-06T00:23:05.551591610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:23:05.551639 containerd[1697]: time="2025-11-06T00:23:05.551604321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:23:05.551639 containerd[1697]: time="2025-11-06T00:23:05.551614539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:23:05.551639 containerd[1697]: time="2025-11-06T00:23:05.551623660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:23:05.551639 containerd[1697]: time="2025-11-06T00:23:05.551633308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:23:05.551717 containerd[1697]: time="2025-11-06T00:23:05.551643226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:23:05.551717 containerd[1697]: time="2025-11-06T00:23:05.551652998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:23:05.551717 containerd[1697]: 
time="2025-11-06T00:23:05.551662975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:23:05.551717 containerd[1697]: time="2025-11-06T00:23:05.551672020Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:23:05.551717 containerd[1697]: time="2025-11-06T00:23:05.551681391Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:23:05.551803 containerd[1697]: time="2025-11-06T00:23:05.551734808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:23:05.551803 containerd[1697]: time="2025-11-06T00:23:05.551745762Z" level=info msg="Start snapshots syncer" Nov 6 00:23:05.551803 containerd[1697]: time="2025-11-06T00:23:05.551768415Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:23:05.552084 containerd[1697]: time="2025-11-06T00:23:05.552046839Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:23:05.552206 containerd[1697]: time="2025-11-06T00:23:05.552102066Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:23:05.552206 containerd[1697]: time="2025-11-06T00:23:05.552185594Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552335746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552371318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552381286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552393242Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552404479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552414650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552423688Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552444500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552453764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552462592Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552496204Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552508942Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:23:05.553084 containerd[1697]: time="2025-11-06T00:23:05.552517364Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552525881Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552533500Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552542008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552584362Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552601253Z" level=info msg="runtime interface created" Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552606231Z" level=info msg="created NRI interface" Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552613908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552624322Z" level=info msg="Connect containerd service" Nov 6 00:23:05.553405 containerd[1697]: time="2025-11-06T00:23:05.552644568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:23:05.553405 containerd[1697]: 
time="2025-11-06T00:23:05.553355534Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:23:05.908096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:05.964718 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.290880422Z" level=info msg="Start subscribing containerd event" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.290930389Z" level=info msg="Start recovering state" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291024104Z" level=info msg="Start event monitor" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291034849Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291043596Z" level=info msg="Start streaming server" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291053574Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291061362Z" level=info msg="runtime interface starting up..." Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291066879Z" level=info msg="starting plugins..." Nov 6 00:23:06.291169 containerd[1697]: time="2025-11-06T00:23:06.291077566Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:23:06.291876 containerd[1697]: time="2025-11-06T00:23:06.291465106Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:23:06.291876 containerd[1697]: time="2025-11-06T00:23:06.291507249Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 00:23:06.291876 containerd[1697]: time="2025-11-06T00:23:06.291546427Z" level=info msg="containerd successfully booted in 0.764756s" Nov 6 00:23:06.291801 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:23:06.293912 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:23:06.298800 systemd[1]: Startup finished in 2.774s (kernel) + 13.462s (initrd) + 16.632s (userspace) = 32.869s. Nov 6 00:23:06.502485 kubelet[1819]: E1106 00:23:06.502440 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:06.504044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:06.504158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:06.504445 systemd[1]: kubelet.service: Consumed 812ms CPU time, 267.3M memory peak. Nov 6 00:23:06.912557 login[1792]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 6 00:23:06.927837 login[1793]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:23:06.937779 systemd-logind[1681]: New session 1 of user core. Nov 6 00:23:06.938667 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:23:06.939678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:23:06.974660 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:23:06.976604 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 6 00:23:07.005261 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:23:07.006886 systemd-logind[1681]: New session c1 of user core. Nov 6 00:23:07.313603 systemd[1838]: Queued start job for default target default.target. Nov 6 00:23:07.322068 systemd[1838]: Created slice app.slice - User Application Slice. Nov 6 00:23:07.322094 systemd[1838]: Reached target paths.target - Paths. Nov 6 00:23:07.322394 systemd[1838]: Reached target timers.target - Timers. Nov 6 00:23:07.323945 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:23:07.333114 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:23:07.334008 systemd[1838]: Reached target sockets.target - Sockets. Nov 6 00:23:07.334055 systemd[1838]: Reached target basic.target - Basic System. Nov 6 00:23:07.334114 systemd[1838]: Reached target default.target - Main User Target. Nov 6 00:23:07.334135 systemd[1838]: Startup finished in 323ms. Nov 6 00:23:07.334266 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:23:07.339487 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 6 00:23:07.416678 waagent[1790]: 2025-11-06T00:23:07.416623Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.416877Z INFO Daemon Daemon OS: flatcar 4459.1.0 Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.417329Z INFO Daemon Daemon Python: 3.11.13 Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.417540Z INFO Daemon Daemon Run daemon Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.417730Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.417967Z INFO Daemon Daemon Using waagent for provisioning Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.418107Z INFO Daemon Daemon Activate resource disk Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.418286Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.419676Z INFO Daemon Daemon Found device: None Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.419801Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.419857Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.420425Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.420695Z INFO Daemon Daemon Running default provisioning handler Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.426221Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.426713Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 6 00:23:07.426747 waagent[1790]: 2025-11-06T00:23:07.426804Z INFO Daemon Daemon cloud-init is enabled: False Nov 6 00:23:07.439771 waagent[1790]: 2025-11-06T00:23:07.426951Z INFO Daemon Daemon Copying ovf-env.xml Nov 6 00:23:07.547511 waagent[1790]: 2025-11-06T00:23:07.547463Z INFO Daemon Daemon Successfully mounted dvd Nov 6 00:23:07.571854 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 6 00:23:07.573679 waagent[1790]: 2025-11-06T00:23:07.573631Z INFO Daemon Daemon Detect protocol endpoint Nov 6 00:23:07.574236 waagent[1790]: 2025-11-06T00:23:07.574207Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:23:07.576503 waagent[1790]: 2025-11-06T00:23:07.576469Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 6 00:23:07.578260 waagent[1790]: 2025-11-06T00:23:07.578117Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 6 00:23:07.579409 waagent[1790]: 2025-11-06T00:23:07.579377Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 6 00:23:07.580471 waagent[1790]: 2025-11-06T00:23:07.580447Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 6 00:23:07.590498 waagent[1790]: 2025-11-06T00:23:07.590468Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 6 00:23:07.592100 waagent[1790]: 2025-11-06T00:23:07.592080Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 6 00:23:07.593444 waagent[1790]: 2025-11-06T00:23:07.593257Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 6 00:23:07.730657 waagent[1790]: 2025-11-06T00:23:07.730609Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 6 00:23:07.731872 waagent[1790]: 2025-11-06T00:23:07.731188Z INFO Daemon Daemon Forcing an update of the goal state. 
Nov 6 00:23:07.741419 waagent[1790]: 2025-11-06T00:23:07.741384Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:23:07.751736 waagent[1790]: 2025-11-06T00:23:07.751709Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 6 00:23:07.753010 waagent[1790]: 2025-11-06T00:23:07.752974Z INFO Daemon Nov 6 00:23:07.753628 waagent[1790]: 2025-11-06T00:23:07.753564Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 054fbf73-fbc3-4cd3-b764-fc80840570c4 eTag: 16150700700478727809 source: Fabric] Nov 6 00:23:07.755842 waagent[1790]: 2025-11-06T00:23:07.755811Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 6 00:23:07.757423 waagent[1790]: 2025-11-06T00:23:07.757396Z INFO Daemon Nov 6 00:23:07.757989 waagent[1790]: 2025-11-06T00:23:07.757964Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:23:07.766242 waagent[1790]: 2025-11-06T00:23:07.766217Z INFO Daemon Daemon Downloading artifacts profile blob Nov 6 00:23:07.833521 waagent[1790]: 2025-11-06T00:23:07.833452Z INFO Daemon Downloaded certificate {'thumbprint': 'A5E6D00860C00B99AD6D2C2393B010B315C942A8', 'hasPrivateKey': True} Nov 6 00:23:07.835577 waagent[1790]: 2025-11-06T00:23:07.835546Z INFO Daemon Fetch goal state completed Nov 6 00:23:07.842242 waagent[1790]: 2025-11-06T00:23:07.842216Z INFO Daemon Daemon Starting provisioning Nov 6 00:23:07.843210 waagent[1790]: 2025-11-06T00:23:07.843179Z INFO Daemon Daemon Handle ovf-env.xml. 
Nov 6 00:23:07.843663 waagent[1790]: 2025-11-06T00:23:07.843597Z INFO Daemon Daemon Set hostname [ci-4459.1.0-n-1b1a1d3a2e] Nov 6 00:23:07.873418 waagent[1790]: 2025-11-06T00:23:07.873381Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-n-1b1a1d3a2e] Nov 6 00:23:07.874925 waagent[1790]: 2025-11-06T00:23:07.874890Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 6 00:23:07.876293 waagent[1790]: 2025-11-06T00:23:07.876269Z INFO Daemon Daemon Primary interface is [eth0] Nov 6 00:23:07.882948 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:23:07.882955 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:23:07.882976 systemd-networkd[1342]: eth0: DHCP lease lost Nov 6 00:23:07.883730 waagent[1790]: 2025-11-06T00:23:07.883694Z INFO Daemon Daemon Create user account if not exists Nov 6 00:23:07.884009 waagent[1790]: 2025-11-06T00:23:07.883872Z INFO Daemon Daemon User core already exists, skip useradd Nov 6 00:23:07.884044 waagent[1790]: 2025-11-06T00:23:07.884013Z INFO Daemon Daemon Configure sudoer Nov 6 00:23:07.897445 waagent[1790]: 2025-11-06T00:23:07.897364Z INFO Daemon Daemon Configure sshd Nov 6 00:23:07.900559 waagent[1790]: 2025-11-06T00:23:07.900525Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 6 00:23:07.902177 waagent[1790]: 2025-11-06T00:23:07.900651Z INFO Daemon Daemon Deploy ssh public key. Nov 6 00:23:07.912794 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:23:07.913392 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:23:07.917136 systemd-logind[1681]: New session 2 of user core. 
Nov 6 00:23:07.927494 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:23:09.029105 waagent[1790]: 2025-11-06T00:23:09.029058Z INFO Daemon Daemon Provisioning complete Nov 6 00:23:09.042496 waagent[1790]: 2025-11-06T00:23:09.042464Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 6 00:23:09.047503 waagent[1790]: 2025-11-06T00:23:09.042673Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 6 00:23:09.047503 waagent[1790]: 2025-11-06T00:23:09.042853Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 6 00:23:09.137361 waagent[1888]: 2025-11-06T00:23:09.137294Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 6 00:23:09.137597 waagent[1888]: 2025-11-06T00:23:09.137400Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Nov 6 00:23:09.137597 waagent[1888]: 2025-11-06T00:23:09.137446Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 6 00:23:09.137597 waagent[1888]: 2025-11-06T00:23:09.137485Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 6 00:23:09.197334 waagent[1888]: 2025-11-06T00:23:09.197286Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 6 00:23:09.197474 waagent[1888]: 2025-11-06T00:23:09.197449Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:23:09.197519 waagent[1888]: 2025-11-06T00:23:09.197499Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:23:09.205992 waagent[1888]: 2025-11-06T00:23:09.205942Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:23:09.212263 waagent[1888]: 2025-11-06T00:23:09.212237Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 6 00:23:09.212574 waagent[1888]: 2025-11-06T00:23:09.212548Z INFO 
ExtHandler Nov 6 00:23:09.212610 waagent[1888]: 2025-11-06T00:23:09.212596Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e6793949-b06d-4683-a9a9-988b153687aa eTag: 16150700700478727809 source: Fabric] Nov 6 00:23:09.212779 waagent[1888]: 2025-11-06T00:23:09.212757Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 6 00:23:09.213084 waagent[1888]: 2025-11-06T00:23:09.213060Z INFO ExtHandler Nov 6 00:23:09.213116 waagent[1888]: 2025-11-06T00:23:09.213097Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:23:09.215796 waagent[1888]: 2025-11-06T00:23:09.215772Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 6 00:23:09.282502 waagent[1888]: 2025-11-06T00:23:09.282432Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A5E6D00860C00B99AD6D2C2393B010B315C942A8', 'hasPrivateKey': True} Nov 6 00:23:09.282766 waagent[1888]: 2025-11-06T00:23:09.282741Z INFO ExtHandler Fetch goal state completed Nov 6 00:23:09.295405 waagent[1888]: 2025-11-06T00:23:09.295336Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 6 00:23:09.299033 waagent[1888]: 2025-11-06T00:23:09.298987Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1888 Nov 6 00:23:09.299133 waagent[1888]: 2025-11-06T00:23:09.299109Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 6 00:23:09.299329 waagent[1888]: 2025-11-06T00:23:09.299310Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 6 00:23:09.300201 waagent[1888]: 2025-11-06T00:23:09.300169Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 6 00:23:09.300475 waagent[1888]: 2025-11-06T00:23:09.300451Z INFO ExtHandler ExtHandler [CGI] Agent will reset the 
quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 6 00:23:09.300577 waagent[1888]: 2025-11-06T00:23:09.300558Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 6 00:23:09.300920 waagent[1888]: 2025-11-06T00:23:09.300898Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 6 00:23:09.373961 waagent[1888]: 2025-11-06T00:23:09.373935Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 6 00:23:09.374088 waagent[1888]: 2025-11-06T00:23:09.374067Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 6 00:23:09.379141 waagent[1888]: 2025-11-06T00:23:09.378820Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 6 00:23:09.383255 systemd[1]: Reload requested from client PID 1903 ('systemctl') (unit waagent.service)... Nov 6 00:23:09.383266 systemd[1]: Reloading... Nov 6 00:23:09.456543 zram_generator::config[1942]: No configuration found. Nov 6 00:23:09.596358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 6 00:23:09.610337 systemd[1]: Reloading finished in 226 ms. Nov 6 00:23:09.622359 waagent[1888]: 2025-11-06T00:23:09.622260Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 6 00:23:09.622412 waagent[1888]: 2025-11-06T00:23:09.622383Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 6 00:23:10.105013 waagent[1888]: 2025-11-06T00:23:10.104915Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 6 00:23:10.105211 waagent[1888]: 2025-11-06T00:23:10.105186Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 6 00:23:10.105871 waagent[1888]: 2025-11-06T00:23:10.105837Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 6 00:23:10.106008 waagent[1888]: 2025-11-06T00:23:10.105984Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:23:10.106063 waagent[1888]: 2025-11-06T00:23:10.106038Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:23:10.106251 waagent[1888]: 2025-11-06T00:23:10.106229Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 6 00:23:10.107095 waagent[1888]: 2025-11-06T00:23:10.106376Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 6 00:23:10.107095 waagent[1888]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 6 00:23:10.107095 waagent[1888]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 6 00:23:10.107095 waagent[1888]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 6 00:23:10.107095 waagent[1888]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:23:10.107095 waagent[1888]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:23:10.107095 waagent[1888]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:23:10.107095 waagent[1888]: 2025-11-06T00:23:10.106526Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 6 00:23:10.107095 waagent[1888]: 2025-11-06T00:23:10.106972Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Nov 6 00:23:10.107320 waagent[1888]: 2025-11-06T00:23:10.106692Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 6 00:23:10.107320 waagent[1888]: 2025-11-06T00:23:10.107212Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 6 00:23:10.107320 waagent[1888]: 2025-11-06T00:23:10.107261Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 6 00:23:10.107488 waagent[1888]: 2025-11-06T00:23:10.107467Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:23:10.107546 waagent[1888]: 2025-11-06T00:23:10.107514Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 6 00:23:10.108229 waagent[1888]: 2025-11-06T00:23:10.108208Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:23:10.108387 waagent[1888]: 2025-11-06T00:23:10.108329Z INFO EnvHandler ExtHandler Configure routes Nov 6 00:23:10.108427 waagent[1888]: 2025-11-06T00:23:10.108406Z INFO EnvHandler ExtHandler Gateway:None Nov 6 00:23:10.108453 waagent[1888]: 2025-11-06T00:23:10.108438Z INFO EnvHandler ExtHandler Routes:None Nov 6 00:23:10.114019 waagent[1888]: 2025-11-06T00:23:10.113991Z INFO ExtHandler ExtHandler Nov 6 00:23:10.114076 waagent[1888]: 2025-11-06T00:23:10.114043Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: aa67a669-aeae-4cee-8906-79da80b455dc correlation f6442350-7591-449f-99e8-29a1a8c3a410 created: 2025-11-06T00:21:52.087311Z] Nov 6 00:23:10.114294 waagent[1888]: 2025-11-06T00:23:10.114273Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 6 00:23:10.114648 waagent[1888]: 2025-11-06T00:23:10.114627Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 6 00:23:10.178244 waagent[1888]: 2025-11-06T00:23:10.177814Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 6 00:23:10.178244 waagent[1888]: Try `iptables -h' or 'iptables --help' for more information.) Nov 6 00:23:10.178244 waagent[1888]: 2025-11-06T00:23:10.178165Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 519C7B80-176D-4CDC-AC94-C406EF3E5262;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 6 00:23:10.207962 waagent[1888]: 2025-11-06T00:23:10.207899Z INFO MonitorHandler ExtHandler Network interfaces: Nov 6 00:23:10.207962 waagent[1888]: Executing ['ip', '-a', '-o', 'link']: Nov 6 00:23:10.207962 waagent[1888]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 6 00:23:10.207962 waagent[1888]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:04:ee brd ff:ff:ff:ff:ff:ff\ alias Network Device Nov 6 00:23:10.207962 waagent[1888]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:41:04:ee brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 6 00:23:10.207962 waagent[1888]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 6 00:23:10.207962 waagent[1888]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 6 00:23:10.207962 waagent[1888]: 2: eth0 inet 10.200.8.43/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 6 00:23:10.207962 waagent[1888]: Executing ['ip', '-6', 
'-a', '-o', 'address']: Nov 6 00:23:10.207962 waagent[1888]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 6 00:23:10.207962 waagent[1888]: 2: eth0 inet6 fe80::7eed:8dff:fe41:4ee/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 6 00:23:10.338804 waagent[1888]: 2025-11-06T00:23:10.338765Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 6 00:23:10.338804 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.338804 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.338804 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.338804 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.338804 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.338804 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.338804 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 00:23:10.338804 waagent[1888]: 3 535 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 00:23:10.338804 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:23:10.341045 waagent[1888]: 2025-11-06T00:23:10.341004Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 6 00:23:10.341045 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.341045 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.341045 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.341045 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.341045 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:23:10.341045 waagent[1888]: pkts bytes target prot opt in out source destination Nov 6 00:23:10.341045 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 
00:23:10.341045 waagent[1888]: 4 587 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 00:23:10.341045 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:23:16.754928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:23:16.756202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:17.268190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:17.277549 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:17.313385 kubelet[2040]: E1106 00:23:17.313354 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:17.316060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:17.316188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:17.316503 systemd[1]: kubelet.service: Consumed 121ms CPU time, 109.2M memory peak. Nov 6 00:23:27.548245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:23:27.549630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:27.986985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:23:27.989767 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:28.023321 kubelet[2055]: E1106 00:23:28.023283 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:28.024807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:28.024908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:28.025309 systemd[1]: kubelet.service: Consumed 117ms CPU time, 110.6M memory peak. Nov 6 00:23:28.507921 chronyd[1658]: Selected source PHC0 Nov 6 00:23:38.048337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:23:38.049678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:38.357262 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:23:38.358234 systemd[1]: Started sshd@0-10.200.8.43:22-10.200.16.10:41558.service - OpenSSH per-connection server daemon (10.200.16.10:41558). Nov 6 00:23:38.685189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:23:38.697609 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:38.729411 kubelet[2074]: E1106 00:23:38.729379 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:38.730785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:38.730912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:38.731155 systemd[1]: kubelet.service: Consumed 112ms CPU time, 108.4M memory peak. Nov 6 00:23:39.145351 sshd[2066]: Accepted publickey for core from 10.200.16.10 port 41558 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:39.146245 sshd-session[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:39.150124 systemd-logind[1681]: New session 3 of user core. Nov 6 00:23:39.156460 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:23:39.708743 systemd[1]: Started sshd@1-10.200.8.43:22-10.200.16.10:41564.service - OpenSSH per-connection server daemon (10.200.16.10:41564). Nov 6 00:23:40.333740 sshd[2084]: Accepted publickey for core from 10.200.16.10 port 41564 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:40.334680 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:40.338463 systemd-logind[1681]: New session 4 of user core. Nov 6 00:23:40.342464 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 6 00:23:40.775368 sshd[2087]: Connection closed by 10.200.16.10 port 41564 Nov 6 00:23:40.775742 sshd-session[2084]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:40.778054 systemd[1]: sshd@1-10.200.8.43:22-10.200.16.10:41564.service: Deactivated successfully. Nov 6 00:23:40.779761 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:23:40.780980 systemd-logind[1681]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:23:40.781690 systemd-logind[1681]: Removed session 4. Nov 6 00:23:40.897391 systemd[1]: Started sshd@2-10.200.8.43:22-10.200.16.10:54032.service - OpenSSH per-connection server daemon (10.200.16.10:54032). Nov 6 00:23:41.531809 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 54032 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:41.532799 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:41.536450 systemd-logind[1681]: New session 5 of user core. Nov 6 00:23:41.542487 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:23:41.973554 sshd[2096]: Connection closed by 10.200.16.10 port 54032 Nov 6 00:23:41.973995 sshd-session[2093]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:41.976834 systemd[1]: sshd@2-10.200.8.43:22-10.200.16.10:54032.service: Deactivated successfully. Nov 6 00:23:41.978133 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:23:41.978819 systemd-logind[1681]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:23:41.979733 systemd-logind[1681]: Removed session 5. Nov 6 00:23:42.087319 systemd[1]: Started sshd@3-10.200.8.43:22-10.200.16.10:54042.service - OpenSSH per-connection server daemon (10.200.16.10:54042). 
Nov 6 00:23:42.711576 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 54042 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:42.712507 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:42.716253 systemd-logind[1681]: New session 6 of user core. Nov 6 00:23:42.722458 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:23:43.152402 sshd[2105]: Connection closed by 10.200.16.10 port 54042 Nov 6 00:23:43.152807 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:43.154961 systemd[1]: sshd@3-10.200.8.43:22-10.200.16.10:54042.service: Deactivated successfully. Nov 6 00:23:43.156184 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:23:43.157233 systemd-logind[1681]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:23:43.158062 systemd-logind[1681]: Removed session 6. Nov 6 00:23:43.273269 systemd[1]: Started sshd@4-10.200.8.43:22-10.200.16.10:54054.service - OpenSSH per-connection server daemon (10.200.16.10:54054). Nov 6 00:23:43.903187 sshd[2111]: Accepted publickey for core from 10.200.16.10 port 54054 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:43.904099 sshd-session[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:43.907745 systemd-logind[1681]: New session 7 of user core. Nov 6 00:23:43.916484 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 6 00:23:44.468061 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:23:44.468258 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:44.496903 sudo[2115]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:44.598257 sshd[2114]: Connection closed by 10.200.16.10 port 54054 Nov 6 00:23:44.598783 sshd-session[2111]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:44.601291 systemd[1]: sshd@4-10.200.8.43:22-10.200.16.10:54054.service: Deactivated successfully. Nov 6 00:23:44.602618 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:23:44.604071 systemd-logind[1681]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:23:44.604809 systemd-logind[1681]: Removed session 7. Nov 6 00:23:44.713809 systemd[1]: Started sshd@5-10.200.8.43:22-10.200.16.10:54064.service - OpenSSH per-connection server daemon (10.200.16.10:54064). Nov 6 00:23:45.340595 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 54064 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:45.341572 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:45.345579 systemd-logind[1681]: New session 8 of user core. Nov 6 00:23:45.354493 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:23:45.684138 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:23:45.684511 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:45.690028 sudo[2126]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:45.693648 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:23:45.693832 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:45.700801 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:23:45.728515 augenrules[2148]: No rules Nov 6 00:23:45.729335 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:23:45.729519 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:23:45.730248 sudo[2125]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:45.830578 sshd[2124]: Connection closed by 10.200.16.10 port 54064 Nov 6 00:23:45.830906 sshd-session[2121]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:45.833031 systemd[1]: sshd@5-10.200.8.43:22-10.200.16.10:54064.service: Deactivated successfully. Nov 6 00:23:45.834458 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:23:45.835811 systemd-logind[1681]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:23:45.836519 systemd-logind[1681]: Removed session 8. Nov 6 00:23:45.942331 systemd[1]: Started sshd@6-10.200.8.43:22-10.200.16.10:54072.service - OpenSSH per-connection server daemon (10.200.16.10:54072). Nov 6 00:23:46.049456 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Nov 6 00:23:46.566953 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 54072 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:23:46.567871 sshd-session[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:46.571945 systemd-logind[1681]: New session 9 of user core. Nov 6 00:23:46.577501 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:23:46.910166 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:23:46.910386 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:48.781959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 00:23:48.783545 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:23:48.786509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:48.794754 (dockerd)[2179]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:23:49.475160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:23:49.477824 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:49.509495 kubelet[2192]: E1106 00:23:49.509467 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:49.510558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:49.510646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:49.510876 systemd[1]: kubelet.service: Consumed 118ms CPU time, 110.8M memory peak. Nov 6 00:23:49.947993 dockerd[2179]: time="2025-11-06T00:23:49.947946454Z" level=info msg="Starting up" Nov 6 00:23:49.949173 dockerd[2179]: time="2025-11-06T00:23:49.948814134Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:23:49.959233 dockerd[2179]: time="2025-11-06T00:23:49.959199908Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:23:50.106726 dockerd[2179]: time="2025-11-06T00:23:50.106556182Z" level=info msg="Loading containers: start." Nov 6 00:23:50.157360 kernel: Initializing XFRM netlink socket Nov 6 00:23:50.429686 update_engine[1682]: I20251106 00:23:50.429644 1682 update_attempter.cc:509] Updating boot flags... Nov 6 00:23:50.646736 systemd-networkd[1342]: docker0: Link UP Nov 6 00:23:50.669422 dockerd[2179]: time="2025-11-06T00:23:50.669390107Z" level=info msg="Loading containers: done." Nov 6 00:23:50.680408 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3384059094-merged.mount: Deactivated successfully. 
Nov 6 00:23:50.699056 dockerd[2179]: time="2025-11-06T00:23:50.699027317Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:23:50.699162 dockerd[2179]: time="2025-11-06T00:23:50.699098379Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:23:50.699186 dockerd[2179]: time="2025-11-06T00:23:50.699165039Z" level=info msg="Initializing buildkit" Nov 6 00:23:50.749767 dockerd[2179]: time="2025-11-06T00:23:50.749727866Z" level=info msg="Completed buildkit initialization" Nov 6 00:23:50.757237 dockerd[2179]: time="2025-11-06T00:23:50.757198689Z" level=info msg="Daemon has completed initialization" Nov 6 00:23:50.757404 dockerd[2179]: time="2025-11-06T00:23:50.757272683Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:23:50.757496 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:23:51.863627 containerd[1697]: time="2025-11-06T00:23:51.863585996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:23:52.792829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005926339.mount: Deactivated successfully. 
Nov 6 00:23:54.309378 containerd[1697]: time="2025-11-06T00:23:54.309316595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:54.311558 containerd[1697]: time="2025-11-06T00:23:54.311449807Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Nov 6 00:23:54.313701 containerd[1697]: time="2025-11-06T00:23:54.313678477Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:54.317396 containerd[1697]: time="2025-11-06T00:23:54.317368697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:54.318149 containerd[1697]: time="2025-11-06T00:23:54.318013368Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.454388192s" Nov 6 00:23:54.318149 containerd[1697]: time="2025-11-06T00:23:54.318044017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:23:54.318734 containerd[1697]: time="2025-11-06T00:23:54.318710070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:23:55.996532 containerd[1697]: time="2025-11-06T00:23:55.996493921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:55.999784 containerd[1697]: time="2025-11-06T00:23:55.999756605Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Nov 6 00:23:56.002448 containerd[1697]: time="2025-11-06T00:23:56.002408912Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:56.008091 containerd[1697]: time="2025-11-06T00:23:56.007523610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:56.008091 containerd[1697]: time="2025-11-06T00:23:56.007929337Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.689195134s" Nov 6 00:23:56.008091 containerd[1697]: time="2025-11-06T00:23:56.007952998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:23:56.008628 containerd[1697]: time="2025-11-06T00:23:56.008610085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:23:57.559864 containerd[1697]: time="2025-11-06T00:23:57.559817257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:57.561877 containerd[1697]: time="2025-11-06T00:23:57.561840821Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Nov 6 00:23:57.564150 containerd[1697]: time="2025-11-06T00:23:57.564113504Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:57.569070 containerd[1697]: time="2025-11-06T00:23:57.568280846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:57.569070 containerd[1697]: time="2025-11-06T00:23:57.568907671Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.560272369s" Nov 6 00:23:57.569070 containerd[1697]: time="2025-11-06T00:23:57.568948156Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 00:23:57.569528 containerd[1697]: time="2025-11-06T00:23:57.569498988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 00:23:58.508091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104511413.mount: Deactivated successfully. 
Nov 6 00:23:58.876219 containerd[1697]: time="2025-11-06T00:23:58.876177814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:58.878188 containerd[1697]: time="2025-11-06T00:23:58.878160798Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Nov 6 00:23:58.880507 containerd[1697]: time="2025-11-06T00:23:58.880470246Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:58.883164 containerd[1697]: time="2025-11-06T00:23:58.883125692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:58.883395 containerd[1697]: time="2025-11-06T00:23:58.883377110Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.313854326s" Nov 6 00:23:58.883453 containerd[1697]: time="2025-11-06T00:23:58.883444718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 00:23:58.883891 containerd[1697]: time="2025-11-06T00:23:58.883872778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 00:23:59.481008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393644124.mount: Deactivated successfully. Nov 6 00:23:59.548022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Nov 6 00:23:59.549567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:00.155491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:00.166542 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:24:00.202062 kubelet[2516]: E1106 00:24:00.202040 2516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:24:00.203687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:24:00.203813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:24:00.204125 systemd[1]: kubelet.service: Consumed 124ms CPU time, 108.3M memory peak. 
Nov 6 00:24:01.319620 containerd[1697]: time="2025-11-06T00:24:01.319574505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:01.321557 containerd[1697]: time="2025-11-06T00:24:01.321525923Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 6 00:24:01.323882 containerd[1697]: time="2025-11-06T00:24:01.323849630Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:01.327367 containerd[1697]: time="2025-11-06T00:24:01.327318705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:01.328277 containerd[1697]: time="2025-11-06T00:24:01.327879105Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.443981251s" Nov 6 00:24:01.328277 containerd[1697]: time="2025-11-06T00:24:01.327908106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 00:24:01.328499 containerd[1697]: time="2025-11-06T00:24:01.328479687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:24:01.803681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963482487.mount: Deactivated successfully. 
Nov 6 00:24:01.826781 containerd[1697]: time="2025-11-06T00:24:01.826746638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:01.830684 containerd[1697]: time="2025-11-06T00:24:01.830653216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 6 00:24:01.833970 containerd[1697]: time="2025-11-06T00:24:01.833937034Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:01.838667 containerd[1697]: time="2025-11-06T00:24:01.838630906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:24:01.839117 containerd[1697]: time="2025-11-06T00:24:01.839006770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.50231ms" Nov 6 00:24:01.839117 containerd[1697]: time="2025-11-06T00:24:01.839030938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:24:01.839532 containerd[1697]: time="2025-11-06T00:24:01.839513995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 00:24:02.538469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608554415.mount: Deactivated 
successfully. Nov 6 00:24:06.246406 containerd[1697]: time="2025-11-06T00:24:06.246360800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:06.249424 containerd[1697]: time="2025-11-06T00:24:06.249390707Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Nov 6 00:24:06.252302 containerd[1697]: time="2025-11-06T00:24:06.252272364Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:06.255736 containerd[1697]: time="2025-11-06T00:24:06.255693632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:06.256536 containerd[1697]: time="2025-11-06T00:24:06.256420551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.416882728s" Nov 6 00:24:06.256536 containerd[1697]: time="2025-11-06T00:24:06.256449822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 00:24:08.715231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:08.715633 systemd[1]: kubelet.service: Consumed 124ms CPU time, 108.3M memory peak. Nov 6 00:24:08.717266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:08.738747 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-9.scope)... 
Nov 6 00:24:08.738846 systemd[1]: Reloading... Nov 6 00:24:08.825370 zram_generator::config[2695]: No configuration found. Nov 6 00:24:09.024736 systemd[1]: Reloading finished in 285 ms. Nov 6 00:24:09.059655 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:24:09.059722 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:24:09.059936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:09.059971 systemd[1]: kubelet.service: Consumed 60ms CPU time, 64.5M memory peak. Nov 6 00:24:09.062599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:09.669222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:09.672629 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:24:09.707165 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:24:09.707165 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:24:09.707165 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:24:09.707461 kubelet[2765]: I1106 00:24:09.707221 2765 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:24:10.297253 kubelet[2765]: I1106 00:24:10.297217 2765 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:24:10.297253 kubelet[2765]: I1106 00:24:10.297240 2765 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:24:10.297486 kubelet[2765]: I1106 00:24:10.297476 2765 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:24:10.328588 kubelet[2765]: I1106 00:24:10.328558 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:24:10.328842 kubelet[2765]: E1106 00:24:10.328823 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:24:10.339878 kubelet[2765]: I1106 00:24:10.339856 2765 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:24:10.342490 kubelet[2765]: I1106 00:24:10.342469 2765 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:24:10.342652 kubelet[2765]: I1106 00:24:10.342626 2765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:24:10.342784 kubelet[2765]: I1106 00:24:10.342651 2765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-1b1a1d3a2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:24:10.342908 kubelet[2765]: I1106 00:24:10.342792 2765 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 
00:24:10.342908 kubelet[2765]: I1106 00:24:10.342801 2765 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:24:10.342908 kubelet[2765]: I1106 00:24:10.342901 2765 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:10.346492 kubelet[2765]: I1106 00:24:10.346269 2765 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:24:10.346492 kubelet[2765]: I1106 00:24:10.346296 2765 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:24:10.346492 kubelet[2765]: I1106 00:24:10.346319 2765 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:24:10.346492 kubelet[2765]: I1106 00:24:10.346335 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:24:10.353898 kubelet[2765]: E1106 00:24:10.353740 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-1b1a1d3a2e&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:24:10.355162 kubelet[2765]: I1106 00:24:10.354822 2765 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:24:10.355277 kubelet[2765]: I1106 00:24:10.355259 2765 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:24:10.356617 kubelet[2765]: W1106 00:24:10.355960 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 6 00:24:10.358686 kubelet[2765]: I1106 00:24:10.358672 2765 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:24:10.358741 kubelet[2765]: I1106 00:24:10.358712 2765 server.go:1289] "Started kubelet" Nov 6 00:24:10.358776 kubelet[2765]: E1106 00:24:10.358763 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:24:10.361369 kubelet[2765]: I1106 00:24:10.361325 2765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:24:10.362122 kubelet[2765]: I1106 00:24:10.362110 2765 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:24:10.364698 kubelet[2765]: I1106 00:24:10.364656 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:24:10.366540 kubelet[2765]: I1106 00:24:10.366514 2765 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:24:10.366540 kubelet[2765]: I1106 00:24:10.365957 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:24:10.366976 kubelet[2765]: I1106 00:24:10.365900 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:24:10.369369 kubelet[2765]: E1106 00:24:10.367833 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-1b1a1d3a2e.18754323d4cf4f05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-1b1a1d3a2e,UID:ci-4459.1.0-n-1b1a1d3a2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-1b1a1d3a2e,},FirstTimestamp:2025-11-06 00:24:10.358689541 +0000 UTC m=+0.682850071,LastTimestamp:2025-11-06 00:24:10.358689541 +0000 UTC m=+0.682850071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-1b1a1d3a2e,}" Nov 6 00:24:10.370376 kubelet[2765]: I1106 00:24:10.370329 2765 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:24:10.370633 kubelet[2765]: I1106 00:24:10.370617 2765 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:24:10.370679 kubelet[2765]: I1106 00:24:10.370668 2765 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:24:10.371394 kubelet[2765]: E1106 00:24:10.371372 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:24:10.371568 kubelet[2765]: I1106 00:24:10.371556 2765 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:24:10.371625 kubelet[2765]: I1106 00:24:10.371613 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:24:10.374552 kubelet[2765]: E1106 00:24:10.374534 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" Nov 6 00:24:10.375956 kubelet[2765]: E1106 00:24:10.374827 2765 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-1b1a1d3a2e?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="200ms" Nov 6 00:24:10.377100 kubelet[2765]: I1106 00:24:10.377083 2765 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:24:10.377774 kubelet[2765]: E1106 00:24:10.377667 2765 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:24:10.382193 kubelet[2765]: I1106 00:24:10.382165 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:24:10.408090 kubelet[2765]: I1106 00:24:10.408078 2765 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:24:10.408090 kubelet[2765]: I1106 00:24:10.408088 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:24:10.408189 kubelet[2765]: I1106 00:24:10.408101 2765 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:10.417038 kubelet[2765]: I1106 00:24:10.416898 2765 policy_none.go:49] "None policy: Start" Nov 6 00:24:10.417038 kubelet[2765]: I1106 00:24:10.416913 2765 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:24:10.417038 kubelet[2765]: I1106 00:24:10.416922 2765 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:24:10.424084 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:24:10.433261 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:24:10.435943 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:24:10.440089 kubelet[2765]: I1106 00:24:10.439850 2765 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 6 00:24:10.440089 kubelet[2765]: I1106 00:24:10.439875 2765 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:24:10.440089 kubelet[2765]: I1106 00:24:10.439896 2765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:24:10.440089 kubelet[2765]: I1106 00:24:10.439901 2765 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:24:10.440089 kubelet[2765]: E1106 00:24:10.439933 2765 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:24:10.443023 kubelet[2765]: E1106 00:24:10.443005 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:24:10.443250 kubelet[2765]: E1106 00:24:10.443233 2765 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:24:10.443385 kubelet[2765]: I1106 00:24:10.443375 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:24:10.443422 kubelet[2765]: I1106 00:24:10.443386 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:24:10.443921 kubelet[2765]: I1106 00:24:10.443813 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:24:10.444995 kubelet[2765]: E1106 00:24:10.444920 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:24:10.444995 kubelet[2765]: E1106 00:24:10.444955 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" Nov 6 00:24:10.544731 kubelet[2765]: I1106 00:24:10.544717 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.544990 kubelet[2765]: E1106 00:24:10.544973 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.552866 systemd[1]: Created slice kubepods-burstable-pod91b00f941165e91e36b82542c790e64d.slice - libcontainer container kubepods-burstable-pod91b00f941165e91e36b82542c790e64d.slice. Nov 6 00:24:10.558898 kubelet[2765]: E1106 00:24:10.558878 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.561942 systemd[1]: Created slice kubepods-burstable-poda71ebcc27462fe89f0f9d82b8fb633ac.slice - libcontainer container kubepods-burstable-poda71ebcc27462fe89f0f9d82b8fb633ac.slice. 
Nov 6 00:24:10.563439 kubelet[2765]: E1106 00:24:10.563415 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572283 kubelet[2765]: I1106 00:24:10.572248 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572354 kubelet[2765]: I1106 00:24:10.572283 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572354 kubelet[2765]: I1106 00:24:10.572301 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572354 kubelet[2765]: I1106 00:24:10.572319 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" 
Nov 6 00:24:10.572354 kubelet[2765]: I1106 00:24:10.572336 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572438 kubelet[2765]: I1106 00:24:10.572367 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572438 kubelet[2765]: I1106 00:24:10.572384 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/217b561751e4f56738156f8b613f6f3c-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"217b561751e4f56738156f8b613f6f3c\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572438 kubelet[2765]: I1106 00:24:10.572399 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.572438 kubelet[2765]: I1106 00:24:10.572417 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" 
(UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.575597 kubelet[2765]: E1106 00:24:10.575576 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-1b1a1d3a2e?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="400ms" Nov 6 00:24:10.577182 systemd[1]: Created slice kubepods-burstable-pod217b561751e4f56738156f8b613f6f3c.slice - libcontainer container kubepods-burstable-pod217b561751e4f56738156f8b613f6f3c.slice. Nov 6 00:24:10.578555 kubelet[2765]: E1106 00:24:10.578539 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.662108 kubelet[2765]: E1106 00:24:10.662013 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-1b1a1d3a2e.18754323d4cf4f05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-1b1a1d3a2e,UID:ci-4459.1.0-n-1b1a1d3a2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-1b1a1d3a2e,},FirstTimestamp:2025-11-06 00:24:10.358689541 +0000 UTC m=+0.682850071,LastTimestamp:2025-11-06 00:24:10.358689541 +0000 UTC m=+0.682850071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-1b1a1d3a2e,}" Nov 6 00:24:10.746786 kubelet[2765]: I1106 00:24:10.746762 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 
00:24:10.747192 kubelet[2765]: E1106 00:24:10.747006 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:10.860367 containerd[1697]: time="2025-11-06T00:24:10.860266904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e,Uid:91b00f941165e91e36b82542c790e64d,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:10.864753 containerd[1697]: time="2025-11-06T00:24:10.864723224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e,Uid:a71ebcc27462fe89f0f9d82b8fb633ac,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:10.880442 containerd[1697]: time="2025-11-06T00:24:10.880268502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e,Uid:217b561751e4f56738156f8b613f6f3c,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:10.934054 containerd[1697]: time="2025-11-06T00:24:10.934024242Z" level=info msg="connecting to shim a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54" address="unix:///run/containerd/s/64fbcf8a2e4ccf382f8990f84fd17dd6f12d23af527149ebc6ed7f3bb30762c5" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:10.955265 containerd[1697]: time="2025-11-06T00:24:10.954889315Z" level=info msg="connecting to shim 13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3" address="unix:///run/containerd/s/eab618e04bd0c5ab882d213810cc4b0cbb0d735d28609a6179494d818d0da629" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:10.959247 containerd[1697]: time="2025-11-06T00:24:10.959219388Z" level=info msg="connecting to shim 7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b" address="unix:///run/containerd/s/47e21cd83981b0058c9cab51b6133ad91267d2561a4ced553f12f2ee3c03f700" namespace=k8s.io protocol=ttrpc version=3 Nov 6 
00:24:10.975968 kubelet[2765]: E1106 00:24:10.975932 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-1b1a1d3a2e?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="800ms" Nov 6 00:24:10.976521 systemd[1]: Started cri-containerd-a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54.scope - libcontainer container a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54. Nov 6 00:24:10.991470 systemd[1]: Started cri-containerd-13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3.scope - libcontainer container 13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3. Nov 6 00:24:10.992641 systemd[1]: Started cri-containerd-7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b.scope - libcontainer container 7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b. Nov 6 00:24:11.055359 containerd[1697]: time="2025-11-06T00:24:11.055308076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e,Uid:91b00f941165e91e36b82542c790e64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54\"" Nov 6 00:24:11.058361 containerd[1697]: time="2025-11-06T00:24:11.058247208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e,Uid:217b561751e4f56738156f8b613f6f3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b\"" Nov 6 00:24:11.063225 containerd[1697]: time="2025-11-06T00:24:11.062778841Z" level=info msg="CreateContainer within sandbox \"a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:24:11.063988 containerd[1697]: 
time="2025-11-06T00:24:11.063958515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e,Uid:a71ebcc27462fe89f0f9d82b8fb633ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3\"" Nov 6 00:24:11.077287 containerd[1697]: time="2025-11-06T00:24:11.077269395Z" level=info msg="CreateContainer within sandbox \"7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:24:11.081781 containerd[1697]: time="2025-11-06T00:24:11.081761011Z" level=info msg="CreateContainer within sandbox \"13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:24:11.097078 containerd[1697]: time="2025-11-06T00:24:11.096587554Z" level=info msg="Container 1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:11.102450 containerd[1697]: time="2025-11-06T00:24:11.102425778Z" level=info msg="Container c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:11.123121 containerd[1697]: time="2025-11-06T00:24:11.123062796Z" level=info msg="Container ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:11.130034 containerd[1697]: time="2025-11-06T00:24:11.130013729Z" level=info msg="CreateContainer within sandbox \"a334b84596dd2735d57f54519e794cb865c4a88dfda09c984238f1335eb0ed54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155\"" Nov 6 00:24:11.130842 containerd[1697]: time="2025-11-06T00:24:11.130821391Z" level=info msg="StartContainer for \"1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155\"" 
Nov 6 00:24:11.132334 containerd[1697]: time="2025-11-06T00:24:11.132312483Z" level=info msg="connecting to shim 1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155" address="unix:///run/containerd/s/64fbcf8a2e4ccf382f8990f84fd17dd6f12d23af527149ebc6ed7f3bb30762c5" protocol=ttrpc version=3 Nov 6 00:24:11.134935 containerd[1697]: time="2025-11-06T00:24:11.134899608Z" level=info msg="CreateContainer within sandbox \"7f860144ad2ab4dd5e667b1e9ce7a495698f42592336c2b6fca42010d8665f8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb\"" Nov 6 00:24:11.135313 containerd[1697]: time="2025-11-06T00:24:11.135294324Z" level=info msg="StartContainer for \"c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb\"" Nov 6 00:24:11.135983 containerd[1697]: time="2025-11-06T00:24:11.135964188Z" level=info msg="connecting to shim c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb" address="unix:///run/containerd/s/47e21cd83981b0058c9cab51b6133ad91267d2561a4ced553f12f2ee3c03f700" protocol=ttrpc version=3 Nov 6 00:24:11.144712 containerd[1697]: time="2025-11-06T00:24:11.144683525Z" level=info msg="CreateContainer within sandbox \"13845fc7a274328566d1b5fd7f47a1eca779bf485b9871d8bcb589bfbfd9ded3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc\"" Nov 6 00:24:11.145136 containerd[1697]: time="2025-11-06T00:24:11.145116520Z" level=info msg="StartContainer for \"ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc\"" Nov 6 00:24:11.146010 containerd[1697]: time="2025-11-06T00:24:11.145982736Z" level=info msg="connecting to shim ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc" address="unix:///run/containerd/s/eab618e04bd0c5ab882d213810cc4b0cbb0d735d28609a6179494d818d0da629" protocol=ttrpc version=3 Nov 6 00:24:11.149403 
kubelet[2765]: I1106 00:24:11.149044 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:11.149572 kubelet[2765]: E1106 00:24:11.149551 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:11.152522 systemd[1]: Started cri-containerd-1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155.scope - libcontainer container 1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155. Nov 6 00:24:11.156605 systemd[1]: Started cri-containerd-c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb.scope - libcontainer container c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb. Nov 6 00:24:11.179479 systemd[1]: Started cri-containerd-ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc.scope - libcontainer container ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc. 
Nov 6 00:24:11.242727 containerd[1697]: time="2025-11-06T00:24:11.242698522Z" level=info msg="StartContainer for \"1cbbf12abbc4b127f149f3f94768861e076c5aa4b9c7a6942860b1d50aded155\" returns successfully" Nov 6 00:24:11.249769 containerd[1697]: time="2025-11-06T00:24:11.249722349Z" level=info msg="StartContainer for \"c361bd65b91f495e1ee1f1b8d0e317b0875bf0bc4c5308fe3764459b2a4b36bb\" returns successfully" Nov 6 00:24:11.260878 containerd[1697]: time="2025-11-06T00:24:11.260431620Z" level=info msg="StartContainer for \"ff4466515a935fb8ffb35377e891d671b5e726220f5462a886b2084c548e1fcc\" returns successfully" Nov 6 00:24:11.453820 kubelet[2765]: E1106 00:24:11.453557 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:11.454778 kubelet[2765]: E1106 00:24:11.454734 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:11.461845 kubelet[2765]: E1106 00:24:11.461652 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:11.952684 kubelet[2765]: I1106 00:24:11.952145 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:12.461509 kubelet[2765]: E1106 00:24:12.461481 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:12.462191 kubelet[2765]: E1106 00:24:12.462173 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 
00:24:12.866892 kubelet[2765]: E1106 00:24:12.866847 2765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-1b1a1d3a2e\" not found" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.047404 kubelet[2765]: I1106 00:24:13.047371 2765 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.047404 kubelet[2765]: E1106 00:24:13.047407 2765 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-1b1a1d3a2e\": node \"ci-4459.1.0-n-1b1a1d3a2e\" not found" Nov 6 00:24:13.076353 kubelet[2765]: I1106 00:24:13.075727 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.202163 kubelet[2765]: E1106 00:24:13.202086 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.202163 kubelet[2765]: I1106 00:24:13.202111 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.205927 kubelet[2765]: E1106 00:24:13.205863 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.205927 kubelet[2765]: I1106 00:24:13.205883 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.213933 kubelet[2765]: E1106 00:24:13.213203 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.354800 kubelet[2765]: I1106 00:24:13.354777 2765 apiserver.go:52] "Watching apiserver" Nov 6 00:24:13.371236 kubelet[2765]: I1106 00:24:13.371213 2765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:24:13.907261 kubelet[2765]: I1106 00:24:13.907232 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:13.947042 kubelet[2765]: I1106 00:24:13.947009 2765 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:24:15.624295 systemd[1]: Reload requested from client PID 3041 ('systemctl') (unit session-9.scope)... Nov 6 00:24:15.624310 systemd[1]: Reloading... Nov 6 00:24:15.704364 zram_generator::config[3094]: No configuration found. Nov 6 00:24:15.868965 systemd[1]: Reloading finished in 244 ms. Nov 6 00:24:15.897915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:15.916045 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:24:15.916244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:15.916288 systemd[1]: kubelet.service: Consumed 939ms CPU time, 129.9M memory peak. Nov 6 00:24:15.917450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:24:16.394413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:24:16.400593 (kubelet)[3155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:24:16.440362 kubelet[3155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:24:16.440362 kubelet[3155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:24:16.440362 kubelet[3155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:24:16.440362 kubelet[3155]: I1106 00:24:16.440307 3155 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:24:16.446267 kubelet[3155]: I1106 00:24:16.446243 3155 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:24:16.446267 kubelet[3155]: I1106 00:24:16.446259 3155 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:24:16.446435 kubelet[3155]: I1106 00:24:16.446425 3155 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:24:16.447093 kubelet[3155]: I1106 00:24:16.447075 3155 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:24:16.449669 kubelet[3155]: I1106 00:24:16.449628 3155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:24:16.453448 kubelet[3155]: I1106 00:24:16.453429 3155 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:24:16.457356 kubelet[3155]: I1106 00:24:16.456972 3155 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:24:16.457356 kubelet[3155]: I1106 00:24:16.457138 3155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:24:16.457589 kubelet[3155]: I1106 00:24:16.457161 3155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-1b1a1d3a2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:24:16.457721 kubelet[3155]: I1106 00:24:16.457712 3155 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 
00:24:16.457759 kubelet[3155]: I1106 00:24:16.457755 3155 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:24:16.457829 kubelet[3155]: I1106 00:24:16.457823 3155 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:16.457971 kubelet[3155]: I1106 00:24:16.457962 3155 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:24:16.458700 kubelet[3155]: I1106 00:24:16.458688 3155 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:24:16.458784 kubelet[3155]: I1106 00:24:16.458779 3155 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:24:16.458828 kubelet[3155]: I1106 00:24:16.458824 3155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:24:16.461427 kubelet[3155]: I1106 00:24:16.461409 3155 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:24:16.461889 kubelet[3155]: I1106 00:24:16.461876 3155 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:24:16.466361 kubelet[3155]: I1106 00:24:16.464592 3155 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:24:16.466361 kubelet[3155]: I1106 00:24:16.464625 3155 server.go:1289] "Started kubelet" Nov 6 00:24:16.467964 kubelet[3155]: I1106 00:24:16.467909 3155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:24:16.475674 kubelet[3155]: E1106 00:24:16.475661 3155 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:24:16.476484 kubelet[3155]: I1106 00:24:16.476463 3155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:24:16.477107 kubelet[3155]: I1106 00:24:16.477083 3155 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:24:16.477253 kubelet[3155]: I1106 00:24:16.477242 3155 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:24:16.479421 kubelet[3155]: I1106 00:24:16.479403 3155 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:24:16.479494 kubelet[3155]: I1106 00:24:16.479486 3155 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:24:16.481063 kubelet[3155]: I1106 00:24:16.480913 3155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481473 3155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481620 3155 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481778 3155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481790 3155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481803 3155 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481816 3155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:24:16.483374 kubelet[3155]: I1106 00:24:16.481824 3155 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:24:16.483374 kubelet[3155]: E1106 00:24:16.481851 3155 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:24:16.490036 kubelet[3155]: I1106 00:24:16.489969 3155 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:24:16.490122 kubelet[3155]: I1106 00:24:16.490097 3155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:24:16.492871 kubelet[3155]: I1106 00:24:16.492847 3155 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:24:16.532362 kubelet[3155]: I1106 00:24:16.532331 3155 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:24:16.532362 kubelet[3155]: I1106 00:24:16.532359 3155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:24:16.532444 kubelet[3155]: I1106 00:24:16.532372 3155 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:24:16.532472 kubelet[3155]: I1106 00:24:16.532462 3155 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:24:16.532493 kubelet[3155]: I1106 00:24:16.532473 3155 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:24:16.532493 kubelet[3155]: I1106 00:24:16.532489 3155 policy_none.go:49] "None policy: Start" Nov 6 00:24:16.532536 kubelet[3155]: I1106 00:24:16.532497 3155 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:24:16.532536 kubelet[3155]: I1106 00:24:16.532504 3155 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:24:16.532579 kubelet[3155]: I1106 00:24:16.532571 3155 state_mem.go:75] "Updated machine memory state" Nov 6 00:24:16.534993 kubelet[3155]: E1106 
00:24:16.534982 3155 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:24:16.535504 kubelet[3155]: I1106 00:24:16.535492 3155 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:24:16.535691 kubelet[3155]: I1106 00:24:16.535672 3155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:24:16.535875 kubelet[3155]: I1106 00:24:16.535868 3155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:24:16.537711 kubelet[3155]: E1106 00:24:16.537696 3155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:24:16.582623 kubelet[3155]: I1106 00:24:16.582605 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.582893 kubelet[3155]: I1106 00:24:16.582879 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.583072 kubelet[3155]: I1106 00:24:16.583060 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.589309 kubelet[3155]: I1106 00:24:16.589249 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:24:16.593228 kubelet[3155]: I1106 00:24:16.593208 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:24:16.593423 kubelet[3155]: E1106 00:24:16.593372 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" already exists" 
pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.593687 kubelet[3155]: I1106 00:24:16.593662 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:24:16.641894 kubelet[3155]: I1106 00:24:16.641719 3155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.669338 kubelet[3155]: I1106 00:24:16.668562 3155 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.669338 kubelet[3155]: I1106 00:24:16.668605 3155 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680511 kubelet[3155]: I1106 00:24:16.680492 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680587 kubelet[3155]: I1106 00:24:16.680515 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680587 kubelet[3155]: I1106 00:24:16.680532 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: 
\"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680587 kubelet[3155]: I1106 00:24:16.680549 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/217b561751e4f56738156f8b613f6f3c-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"217b561751e4f56738156f8b613f6f3c\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680587 kubelet[3155]: I1106 00:24:16.680563 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680671 kubelet[3155]: I1106 00:24:16.680594 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91b00f941165e91e36b82542c790e64d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"91b00f941165e91e36b82542c790e64d\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680671 kubelet[3155]: I1106 00:24:16.680626 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680671 kubelet[3155]: I1106 00:24:16.680642 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:16.680671 kubelet[3155]: I1106 00:24:16.680657 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71ebcc27462fe89f0f9d82b8fb633ac-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e\" (UID: \"a71ebcc27462fe89f0f9d82b8fb633ac\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:17.460774 kubelet[3155]: I1106 00:24:17.460045 3155 apiserver.go:52] "Watching apiserver" Nov 6 00:24:17.480145 kubelet[3155]: I1106 00:24:17.480115 3155 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:24:17.521916 kubelet[3155]: I1106 00:24:17.521894 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:17.537514 kubelet[3155]: I1106 00:24:17.537232 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:24:17.537514 kubelet[3155]: E1106 00:24:17.537407 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:24:17.553290 kubelet[3155]: I1106 00:24:17.553250 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-1b1a1d3a2e" podStartSLOduration=1.5532364749999998 podStartE2EDuration="1.553236475s" podCreationTimestamp="2025-11-06 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-06 00:24:17.537418352 +0000 UTC m=+1.132932850" watchObservedRunningTime="2025-11-06 00:24:17.553236475 +0000 UTC m=+1.148750970" Nov 6 00:24:17.566227 kubelet[3155]: I1106 00:24:17.566057 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-1b1a1d3a2e" podStartSLOduration=1.5660195890000002 podStartE2EDuration="1.566019589s" podCreationTimestamp="2025-11-06 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:17.553963042 +0000 UTC m=+1.149477564" watchObservedRunningTime="2025-11-06 00:24:17.566019589 +0000 UTC m=+1.161534086" Nov 6 00:24:17.566227 kubelet[3155]: I1106 00:24:17.566155 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-1b1a1d3a2e" podStartSLOduration=4.566149879 podStartE2EDuration="4.566149879s" podCreationTimestamp="2025-11-06 00:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:17.566104488 +0000 UTC m=+1.161618989" watchObservedRunningTime="2025-11-06 00:24:17.566149879 +0000 UTC m=+1.161664382" Nov 6 00:24:22.086758 kubelet[3155]: I1106 00:24:22.086725 3155 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:24:22.087311 containerd[1697]: time="2025-11-06T00:24:22.087055948Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 6 00:24:22.087639 kubelet[3155]: I1106 00:24:22.087361 3155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:24:22.524294 systemd[1]: Created slice kubepods-besteffort-podf84fceba_1327_4b63_b1a0_f35dc87348bb.slice - libcontainer container kubepods-besteffort-podf84fceba_1327_4b63_b1a0_f35dc87348bb.slice. Nov 6 00:24:22.620898 kubelet[3155]: I1106 00:24:22.620865 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84fceba-1327-4b63-b1a0-f35dc87348bb-xtables-lock\") pod \"kube-proxy-gzttc\" (UID: \"f84fceba-1327-4b63-b1a0-f35dc87348bb\") " pod="kube-system/kube-proxy-gzttc" Nov 6 00:24:22.620898 kubelet[3155]: I1106 00:24:22.620893 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfqq\" (UniqueName: \"kubernetes.io/projected/f84fceba-1327-4b63-b1a0-f35dc87348bb-kube-api-access-kpfqq\") pod \"kube-proxy-gzttc\" (UID: \"f84fceba-1327-4b63-b1a0-f35dc87348bb\") " pod="kube-system/kube-proxy-gzttc" Nov 6 00:24:22.621027 kubelet[3155]: I1106 00:24:22.620911 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f84fceba-1327-4b63-b1a0-f35dc87348bb-kube-proxy\") pod \"kube-proxy-gzttc\" (UID: \"f84fceba-1327-4b63-b1a0-f35dc87348bb\") " pod="kube-system/kube-proxy-gzttc" Nov 6 00:24:22.621027 kubelet[3155]: I1106 00:24:22.620924 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84fceba-1327-4b63-b1a0-f35dc87348bb-lib-modules\") pod \"kube-proxy-gzttc\" (UID: \"f84fceba-1327-4b63-b1a0-f35dc87348bb\") " pod="kube-system/kube-proxy-gzttc" Nov 6 00:24:22.830937 containerd[1697]: time="2025-11-06T00:24:22.830893235Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-gzttc,Uid:f84fceba-1327-4b63-b1a0-f35dc87348bb,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:22.875117 containerd[1697]: time="2025-11-06T00:24:22.875067129Z" level=info msg="connecting to shim 61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42" address="unix:///run/containerd/s/d0e756aecdf392b37cb68588b3b8febc6786ccd82e0386ab51b3e991edc79f8c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:22.905616 systemd[1]: Started cri-containerd-61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42.scope - libcontainer container 61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42. Nov 6 00:24:22.925185 containerd[1697]: time="2025-11-06T00:24:22.925164749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzttc,Uid:f84fceba-1327-4b63-b1a0-f35dc87348bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42\"" Nov 6 00:24:22.931459 containerd[1697]: time="2025-11-06T00:24:22.931441324Z" level=info msg="CreateContainer within sandbox \"61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:24:22.949590 containerd[1697]: time="2025-11-06T00:24:22.949561745Z" level=info msg="Container 7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:22.952623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619603438.mount: Deactivated successfully. 
Nov 6 00:24:22.970396 containerd[1697]: time="2025-11-06T00:24:22.970374803Z" level=info msg="CreateContainer within sandbox \"61ed8fd9c92d67a99b147d8f7ac36ee0a1a1aacb8e1d8a33d76dacd1753fae42\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6\"" Nov 6 00:24:22.971955 containerd[1697]: time="2025-11-06T00:24:22.970765332Z" level=info msg="StartContainer for \"7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6\"" Nov 6 00:24:22.971955 containerd[1697]: time="2025-11-06T00:24:22.971889688Z" level=info msg="connecting to shim 7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6" address="unix:///run/containerd/s/d0e756aecdf392b37cb68588b3b8febc6786ccd82e0386ab51b3e991edc79f8c" protocol=ttrpc version=3 Nov 6 00:24:22.987461 systemd[1]: Started cri-containerd-7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6.scope - libcontainer container 7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6. Nov 6 00:24:23.016257 containerd[1697]: time="2025-11-06T00:24:23.016205079Z" level=info msg="StartContainer for \"7684de63c328995108fcc3c8301a590c32a0d911d7b16717da6f67d09e7bfba6\" returns successfully" Nov 6 00:24:23.335127 systemd[1]: Created slice kubepods-besteffort-pod2335233e_6a09_49f6_ae5a_250c8b70e17a.slice - libcontainer container kubepods-besteffort-pod2335233e_6a09_49f6_ae5a_250c8b70e17a.slice. 
Nov 6 00:24:23.426709 kubelet[3155]: I1106 00:24:23.426677 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6h7\" (UniqueName: \"kubernetes.io/projected/2335233e-6a09-49f6-ae5a-250c8b70e17a-kube-api-access-xn6h7\") pod \"tigera-operator-7dcd859c48-h8ht6\" (UID: \"2335233e-6a09-49f6-ae5a-250c8b70e17a\") " pod="tigera-operator/tigera-operator-7dcd859c48-h8ht6" Nov 6 00:24:23.427003 kubelet[3155]: I1106 00:24:23.426712 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2335233e-6a09-49f6-ae5a-250c8b70e17a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-h8ht6\" (UID: \"2335233e-6a09-49f6-ae5a-250c8b70e17a\") " pod="tigera-operator/tigera-operator-7dcd859c48-h8ht6" Nov 6 00:24:23.637496 containerd[1697]: time="2025-11-06T00:24:23.637416679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h8ht6,Uid:2335233e-6a09-49f6-ae5a-250c8b70e17a,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:24:23.684073 containerd[1697]: time="2025-11-06T00:24:23.684043191Z" level=info msg="connecting to shim fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843" address="unix:///run/containerd/s/4be22aec735f5d6509d8d802065d54981a6e78682b23084a544a9dc6bb67c886" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:23.705495 systemd[1]: Started cri-containerd-fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843.scope - libcontainer container fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843. 
Nov 6 00:24:23.752197 containerd[1697]: time="2025-11-06T00:24:23.752176879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h8ht6,Uid:2335233e-6a09-49f6-ae5a-250c8b70e17a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843\"" Nov 6 00:24:23.753305 containerd[1697]: time="2025-11-06T00:24:23.753251602Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:24:25.145332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369589881.mount: Deactivated successfully. Nov 6 00:24:26.548529 kubelet[3155]: I1106 00:24:26.548314 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzttc" podStartSLOduration=4.548285262 podStartE2EDuration="4.548285262s" podCreationTimestamp="2025-11-06 00:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:23.555242573 +0000 UTC m=+7.150757088" watchObservedRunningTime="2025-11-06 00:24:26.548285262 +0000 UTC m=+10.143799774" Nov 6 00:24:29.698921 containerd[1697]: time="2025-11-06T00:24:29.698880893Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:29.701149 containerd[1697]: time="2025-11-06T00:24:29.701117237Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:24:29.703574 containerd[1697]: time="2025-11-06T00:24:29.703520751Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:29.707196 containerd[1697]: time="2025-11-06T00:24:29.706798998Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:29.707196 containerd[1697]: time="2025-11-06T00:24:29.707112714Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.953831347s" Nov 6 00:24:29.707196 containerd[1697]: time="2025-11-06T00:24:29.707137163Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:24:29.713152 containerd[1697]: time="2025-11-06T00:24:29.713126081Z" level=info msg="CreateContainer within sandbox \"fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:24:29.729359 containerd[1697]: time="2025-11-06T00:24:29.729074187Z" level=info msg="Container d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:29.740166 containerd[1697]: time="2025-11-06T00:24:29.740142060Z" level=info msg="CreateContainer within sandbox \"fe7dcfaa2cf8d8b3ae9b95ca5d8e09d23dcd2720098233c474fe0c4bcce09843\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223\"" Nov 6 00:24:29.740921 containerd[1697]: time="2025-11-06T00:24:29.740490020Z" level=info msg="StartContainer for \"d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223\"" Nov 6 00:24:29.741115 containerd[1697]: time="2025-11-06T00:24:29.741089212Z" level=info msg="connecting to shim 
d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223" address="unix:///run/containerd/s/4be22aec735f5d6509d8d802065d54981a6e78682b23084a544a9dc6bb67c886" protocol=ttrpc version=3 Nov 6 00:24:29.769483 systemd[1]: Started cri-containerd-d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223.scope - libcontainer container d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223. Nov 6 00:24:29.836536 containerd[1697]: time="2025-11-06T00:24:29.836508343Z" level=info msg="StartContainer for \"d3ffd9050eb2f61e687a5dd21f61109f3d96b7529c6f671e6c1f0f260ca88223\" returns successfully" Nov 6 00:24:37.215335 sudo[2161]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:37.322715 sshd[2160]: Connection closed by 10.200.16.10 port 54072 Nov 6 00:24:37.322997 sshd-session[2157]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:37.326175 systemd[1]: sshd@6-10.200.8.43:22-10.200.16.10:54072.service: Deactivated successfully. Nov 6 00:24:37.327852 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:24:37.328018 systemd[1]: session-9.scope: Consumed 3.398s CPU time, 231.4M memory peak. Nov 6 00:24:37.329193 systemd-logind[1681]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:24:37.330616 systemd-logind[1681]: Removed session 9. 
Nov 6 00:24:44.953751 kubelet[3155]: I1106 00:24:44.953683 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-h8ht6" podStartSLOduration=15.998911151 podStartE2EDuration="21.953667864s" podCreationTimestamp="2025-11-06 00:24:23 +0000 UTC" firstStartedPulling="2025-11-06 00:24:23.752998068 +0000 UTC m=+7.348512558" lastFinishedPulling="2025-11-06 00:24:29.70775478 +0000 UTC m=+13.303269271" observedRunningTime="2025-11-06 00:24:30.558004447 +0000 UTC m=+14.153518951" watchObservedRunningTime="2025-11-06 00:24:44.953667864 +0000 UTC m=+28.549182363" Nov 6 00:24:44.967738 systemd[1]: Created slice kubepods-besteffort-pod04a3e2af_a4ca_45be_a16e_905cb0db4fe3.slice - libcontainer container kubepods-besteffort-pod04a3e2af_a4ca_45be_a16e_905cb0db4fe3.slice. Nov 6 00:24:45.065186 kubelet[3155]: I1106 00:24:45.065143 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04a3e2af-a4ca-45be-a16e-905cb0db4fe3-tigera-ca-bundle\") pod \"calico-typha-5b74db6dc9-tzqmz\" (UID: \"04a3e2af-a4ca-45be-a16e-905cb0db4fe3\") " pod="calico-system/calico-typha-5b74db6dc9-tzqmz" Nov 6 00:24:45.065186 kubelet[3155]: I1106 00:24:45.065183 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8hb\" (UniqueName: \"kubernetes.io/projected/04a3e2af-a4ca-45be-a16e-905cb0db4fe3-kube-api-access-nj8hb\") pod \"calico-typha-5b74db6dc9-tzqmz\" (UID: \"04a3e2af-a4ca-45be-a16e-905cb0db4fe3\") " pod="calico-system/calico-typha-5b74db6dc9-tzqmz" Nov 6 00:24:45.065304 kubelet[3155]: I1106 00:24:45.065199 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04a3e2af-a4ca-45be-a16e-905cb0db4fe3-typha-certs\") pod \"calico-typha-5b74db6dc9-tzqmz\" (UID: 
\"04a3e2af-a4ca-45be-a16e-905cb0db4fe3\") " pod="calico-system/calico-typha-5b74db6dc9-tzqmz" Nov 6 00:24:45.272822 containerd[1697]: time="2025-11-06T00:24:45.272745021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b74db6dc9-tzqmz,Uid:04a3e2af-a4ca-45be-a16e-905cb0db4fe3,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:45.320680 containerd[1697]: time="2025-11-06T00:24:45.320643477Z" level=info msg="connecting to shim 9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e" address="unix:///run/containerd/s/d2cd4d9dfa948a5eb77abe4186de02b58f3d75c70aaaa6afbe8a350ec315392f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:45.345632 systemd[1]: Started cri-containerd-9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e.scope - libcontainer container 9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e. Nov 6 00:24:45.387239 containerd[1697]: time="2025-11-06T00:24:45.387183633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b74db6dc9-tzqmz,Uid:04a3e2af-a4ca-45be-a16e-905cb0db4fe3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e\"" Nov 6 00:24:45.389387 containerd[1697]: time="2025-11-06T00:24:45.389362055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:24:45.401074 systemd[1]: Created slice kubepods-besteffort-pod21b4f7e8_5b58_4e04_bb84_1e16f58f314a.slice - libcontainer container kubepods-besteffort-pod21b4f7e8_5b58_4e04_bb84_1e16f58f314a.slice. 
Nov 6 00:24:45.467564 kubelet[3155]: I1106 00:24:45.467541 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-xtables-lock\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467794 kubelet[3155]: I1106 00:24:45.467572 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6ls\" (UniqueName: \"kubernetes.io/projected/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-kube-api-access-2w6ls\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467794 kubelet[3155]: I1106 00:24:45.467590 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-tigera-ca-bundle\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467794 kubelet[3155]: I1106 00:24:45.467622 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-cni-net-dir\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467794 kubelet[3155]: I1106 00:24:45.467635 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-policysync\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467794 kubelet[3155]: I1106 00:24:45.467658 3155 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-flexvol-driver-host\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467897 kubelet[3155]: I1106 00:24:45.467676 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-node-certs\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467897 kubelet[3155]: I1106 00:24:45.467699 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-var-lib-calico\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467897 kubelet[3155]: I1106 00:24:45.467729 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-cni-bin-dir\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467897 kubelet[3155]: I1106 00:24:45.467751 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-cni-log-dir\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467897 kubelet[3155]: I1106 00:24:45.467765 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-var-run-calico\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.467970 kubelet[3155]: I1106 00:24:45.467796 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21b4f7e8-5b58-4e04-bb84-1e16f58f314a-lib-modules\") pod \"calico-node-gbft2\" (UID: \"21b4f7e8-5b58-4e04-bb84-1e16f58f314a\") " pod="calico-system/calico-node-gbft2" Nov 6 00:24:45.570859 kubelet[3155]: E1106 00:24:45.570818 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.570859 kubelet[3155]: W1106 00:24:45.570833 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.572876 kubelet[3155]: E1106 00:24:45.572787 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.573312 kubelet[3155]: E1106 00:24:45.573288 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.573420 kubelet[3155]: W1106 00:24:45.573364 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.573420 kubelet[3155]: E1106 00:24:45.573378 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.575372 kubelet[3155]: E1106 00:24:45.573906 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.575372 kubelet[3155]: W1106 00:24:45.573944 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.575372 kubelet[3155]: E1106 00:24:45.573955 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.577567 kubelet[3155]: E1106 00:24:45.577009 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.578587 kubelet[3155]: W1106 00:24:45.577668 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.578587 kubelet[3155]: E1106 00:24:45.578398 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.578737 kubelet[3155]: E1106 00:24:45.578705 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.578737 kubelet[3155]: W1106 00:24:45.578729 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.578802 kubelet[3155]: E1106 00:24:45.578740 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.578976 kubelet[3155]: E1106 00:24:45.578901 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.578976 kubelet[3155]: W1106 00:24:45.578928 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.578976 kubelet[3155]: E1106 00:24:45.578938 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.579297 kubelet[3155]: E1106 00:24:45.579260 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.579297 kubelet[3155]: W1106 00:24:45.579271 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.579438 kubelet[3155]: E1106 00:24:45.579282 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.579647 kubelet[3155]: E1106 00:24:45.579634 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.579724 kubelet[3155]: W1106 00:24:45.579685 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.579724 kubelet[3155]: E1106 00:24:45.579696 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.579966 kubelet[3155]: E1106 00:24:45.579958 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.580034 kubelet[3155]: W1106 00:24:45.579990 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.580034 kubelet[3155]: E1106 00:24:45.580000 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.580460 kubelet[3155]: E1106 00:24:45.580448 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.580615 kubelet[3155]: W1106 00:24:45.580561 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.580615 kubelet[3155]: E1106 00:24:45.580582 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.580871 kubelet[3155]: E1106 00:24:45.580860 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.580871 kubelet[3155]: W1106 00:24:45.580868 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.580929 kubelet[3155]: E1106 00:24:45.580878 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.581045 kubelet[3155]: E1106 00:24:45.581021 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.581045 kubelet[3155]: W1106 00:24:45.581041 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.581106 kubelet[3155]: E1106 00:24:45.581047 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.581168 kubelet[3155]: E1106 00:24:45.581147 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.581168 kubelet[3155]: W1106 00:24:45.581165 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.581214 kubelet[3155]: E1106 00:24:45.581171 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.581333 kubelet[3155]: E1106 00:24:45.581323 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.581333 kubelet[3155]: W1106 00:24:45.581331 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.581439 kubelet[3155]: E1106 00:24:45.581337 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.737678 kubelet[3155]: E1106 00:24:45.737639 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:45.743051 kubelet[3155]: E1106 00:24:45.743029 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.743051 kubelet[3155]: W1106 00:24:45.743048 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.743172 kubelet[3155]: E1106 00:24:45.743062 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.753852 kubelet[3155]: E1106 00:24:45.753832 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.753852 kubelet[3155]: W1106 00:24:45.753850 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.753962 kubelet[3155]: E1106 00:24:45.753865 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.754074 kubelet[3155]: E1106 00:24:45.754067 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.754104 kubelet[3155]: W1106 00:24:45.754075 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.754104 kubelet[3155]: E1106 00:24:45.754091 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.755843 kubelet[3155]: E1106 00:24:45.755736 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.756149 kubelet[3155]: W1106 00:24:45.755927 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.756149 kubelet[3155]: E1106 00:24:45.755947 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.756618 kubelet[3155]: E1106 00:24:45.756483 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.756979 kubelet[3155]: W1106 00:24:45.756755 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.756979 kubelet[3155]: E1106 00:24:45.756833 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.757934 kubelet[3155]: E1106 00:24:45.757912 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.757934 kubelet[3155]: W1106 00:24:45.757933 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758030 kubelet[3155]: E1106 00:24:45.757946 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758134 kubelet[3155]: E1106 00:24:45.758057 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758134 kubelet[3155]: W1106 00:24:45.758064 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758134 kubelet[3155]: E1106 00:24:45.758070 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.758205 kubelet[3155]: E1106 00:24:45.758147 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758205 kubelet[3155]: W1106 00:24:45.758152 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758205 kubelet[3155]: E1106 00:24:45.758158 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758261 kubelet[3155]: E1106 00:24:45.758234 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758261 kubelet[3155]: W1106 00:24:45.758238 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758261 kubelet[3155]: E1106 00:24:45.758244 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.758479 kubelet[3155]: E1106 00:24:45.758324 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758479 kubelet[3155]: W1106 00:24:45.758329 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758479 kubelet[3155]: E1106 00:24:45.758337 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758479 kubelet[3155]: E1106 00:24:45.758414 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758479 kubelet[3155]: W1106 00:24:45.758419 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758479 kubelet[3155]: E1106 00:24:45.758426 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.758603 kubelet[3155]: E1106 00:24:45.758493 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758603 kubelet[3155]: W1106 00:24:45.758497 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758603 kubelet[3155]: E1106 00:24:45.758505 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758603 kubelet[3155]: E1106 00:24:45.758570 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758603 kubelet[3155]: W1106 00:24:45.758577 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758603 kubelet[3155]: E1106 00:24:45.758583 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.758716 kubelet[3155]: E1106 00:24:45.758657 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758716 kubelet[3155]: W1106 00:24:45.758661 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758716 kubelet[3155]: E1106 00:24:45.758667 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758774 kubelet[3155]: E1106 00:24:45.758731 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758774 kubelet[3155]: W1106 00:24:45.758735 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758774 kubelet[3155]: E1106 00:24:45.758741 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.758833 kubelet[3155]: E1106 00:24:45.758803 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758833 kubelet[3155]: W1106 00:24:45.758807 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758833 kubelet[3155]: E1106 00:24:45.758812 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.758891 kubelet[3155]: E1106 00:24:45.758883 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.758891 kubelet[3155]: W1106 00:24:45.758886 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.758928 kubelet[3155]: E1106 00:24:45.758894 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.758970 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.759413 kubelet[3155]: W1106 00:24:45.758975 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.758979 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.759041 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.759413 kubelet[3155]: W1106 00:24:45.759045 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.759050 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.759110 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.759413 kubelet[3155]: W1106 00:24:45.759114 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.759119 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.759413 kubelet[3155]: E1106 00:24:45.759178 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.759617 kubelet[3155]: W1106 00:24:45.759182 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.759617 kubelet[3155]: E1106 00:24:45.759186 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.769156 kubelet[3155]: E1106 00:24:45.769139 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.769156 kubelet[3155]: W1106 00:24:45.769153 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.769274 kubelet[3155]: E1106 00:24:45.769163 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.769274 kubelet[3155]: I1106 00:24:45.769183 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9d7d62c-06ef-4717-9bd7-eae5448191dc-socket-dir\") pod \"csi-node-driver-wmdqb\" (UID: \"a9d7d62c-06ef-4717-9bd7-eae5448191dc\") " pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:45.769369 kubelet[3155]: E1106 00:24:45.769356 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.769369 kubelet[3155]: W1106 00:24:45.769366 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.769441 kubelet[3155]: E1106 00:24:45.769374 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.769441 kubelet[3155]: I1106 00:24:45.769390 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9d7d62c-06ef-4717-9bd7-eae5448191dc-registration-dir\") pod \"csi-node-driver-wmdqb\" (UID: \"a9d7d62c-06ef-4717-9bd7-eae5448191dc\") " pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:45.769529 kubelet[3155]: E1106 00:24:45.769517 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.769529 kubelet[3155]: W1106 00:24:45.769526 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.769575 kubelet[3155]: E1106 00:24:45.769533 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.769575 kubelet[3155]: I1106 00:24:45.769547 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a9d7d62c-06ef-4717-9bd7-eae5448191dc-varrun\") pod \"csi-node-driver-wmdqb\" (UID: \"a9d7d62c-06ef-4717-9bd7-eae5448191dc\") " pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:45.769778 kubelet[3155]: E1106 00:24:45.769767 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.769881 kubelet[3155]: W1106 00:24:45.769778 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.769881 kubelet[3155]: E1106 00:24:45.769787 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.769881 kubelet[3155]: I1106 00:24:45.769816 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwngv\" (UniqueName: \"kubernetes.io/projected/a9d7d62c-06ef-4717-9bd7-eae5448191dc-kube-api-access-mwngv\") pod \"csi-node-driver-wmdqb\" (UID: \"a9d7d62c-06ef-4717-9bd7-eae5448191dc\") " pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:45.770110 kubelet[3155]: E1106 00:24:45.770102 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.770168 kubelet[3155]: W1106 00:24:45.770160 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.770217 kubelet[3155]: E1106 00:24:45.770207 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.770548 kubelet[3155]: E1106 00:24:45.770534 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.770548 kubelet[3155]: W1106 00:24:45.770545 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.770624 kubelet[3155]: E1106 00:24:45.770555 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.770698 kubelet[3155]: E1106 00:24:45.770692 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.770732 kubelet[3155]: W1106 00:24:45.770722 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.770765 kubelet[3155]: E1106 00:24:45.770731 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.770827 kubelet[3155]: E1106 00:24:45.770819 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.770827 kubelet[3155]: W1106 00:24:45.770825 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771022 kubelet[3155]: E1106 00:24:45.770831 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.771022 kubelet[3155]: E1106 00:24:45.770938 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771022 kubelet[3155]: W1106 00:24:45.770944 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771022 kubelet[3155]: E1106 00:24:45.770949 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.771111 kubelet[3155]: E1106 00:24:45.771101 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771111 kubelet[3155]: W1106 00:24:45.771108 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771191 kubelet[3155]: E1106 00:24:45.771114 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.771191 kubelet[3155]: I1106 00:24:45.771163 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9d7d62c-06ef-4717-9bd7-eae5448191dc-kubelet-dir\") pod \"csi-node-driver-wmdqb\" (UID: \"a9d7d62c-06ef-4717-9bd7-eae5448191dc\") " pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:45.771254 kubelet[3155]: E1106 00:24:45.771204 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771254 kubelet[3155]: W1106 00:24:45.771208 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771254 kubelet[3155]: E1106 00:24:45.771214 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.771358 kubelet[3155]: E1106 00:24:45.771285 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771358 kubelet[3155]: W1106 00:24:45.771290 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771358 kubelet[3155]: E1106 00:24:45.771295 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.771468 kubelet[3155]: E1106 00:24:45.771460 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771565 kubelet[3155]: W1106 00:24:45.771508 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771565 kubelet[3155]: E1106 00:24:45.771517 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.771683 kubelet[3155]: E1106 00:24:45.771673 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771711 kubelet[3155]: W1106 00:24:45.771690 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771711 kubelet[3155]: E1106 00:24:45.771697 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.771816 kubelet[3155]: E1106 00:24:45.771807 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.771816 kubelet[3155]: W1106 00:24:45.771814 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.771852 kubelet[3155]: E1106 00:24:45.771820 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.872052 kubelet[3155]: E1106 00:24:45.871989 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.872052 kubelet[3155]: W1106 00:24:45.872003 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.872052 kubelet[3155]: E1106 00:24:45.872015 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.872488 kubelet[3155]: E1106 00:24:45.872175 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.872488 kubelet[3155]: W1106 00:24:45.872207 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.872488 kubelet[3155]: E1106 00:24:45.872215 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.872575 kubelet[3155]: E1106 00:24:45.872516 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.872575 kubelet[3155]: W1106 00:24:45.872523 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.872575 kubelet[3155]: E1106 00:24:45.872533 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.872777 kubelet[3155]: E1106 00:24:45.872767 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.872809 kubelet[3155]: W1106 00:24:45.872777 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.872809 kubelet[3155]: E1106 00:24:45.872784 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.873071 kubelet[3155]: E1106 00:24:45.873013 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.873071 kubelet[3155]: W1106 00:24:45.873023 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.873071 kubelet[3155]: E1106 00:24:45.873030 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.873159 kubelet[3155]: E1106 00:24:45.873153 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.873186 kubelet[3155]: W1106 00:24:45.873159 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.873186 kubelet[3155]: E1106 00:24:45.873166 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873295 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874380 kubelet[3155]: W1106 00:24:45.873324 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873331 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873631 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874380 kubelet[3155]: W1106 00:24:45.873653 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873660 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873818 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874380 kubelet[3155]: W1106 00:24:45.873822 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873828 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.874380 kubelet[3155]: E1106 00:24:45.873976 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874636 kubelet[3155]: W1106 00:24:45.873980 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.873986 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.874129 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874636 kubelet[3155]: W1106 00:24:45.874134 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.874140 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.874303 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874636 kubelet[3155]: W1106 00:24:45.874307 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.874313 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.874636 kubelet[3155]: E1106 00:24:45.874475 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874636 kubelet[3155]: W1106 00:24:45.874480 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874488 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874601 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874876 kubelet[3155]: W1106 00:24:45.874606 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874612 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874745 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874876 kubelet[3155]: W1106 00:24:45.874750 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874756 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874844 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.874876 kubelet[3155]: W1106 00:24:45.874848 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.874876 kubelet[3155]: E1106 00:24:45.874854 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.875104 kubelet[3155]: E1106 00:24:45.874953 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.875104 kubelet[3155]: W1106 00:24:45.874959 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.875104 kubelet[3155]: E1106 00:24:45.874965 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.875104 kubelet[3155]: E1106 00:24:45.875082 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.875104 kubelet[3155]: W1106 00:24:45.875086 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.875104 kubelet[3155]: E1106 00:24:45.875092 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.875245 kubelet[3155]: E1106 00:24:45.875195 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.875245 kubelet[3155]: W1106 00:24:45.875200 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.875245 kubelet[3155]: E1106 00:24:45.875205 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.875317 kubelet[3155]: E1106 00:24:45.875283 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.875317 kubelet[3155]: W1106 00:24:45.875287 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.875317 kubelet[3155]: E1106 00:24:45.875292 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875400 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.876448 kubelet[3155]: W1106 00:24:45.875404 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875410 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875616 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.876448 kubelet[3155]: W1106 00:24:45.875621 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875628 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875745 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.876448 kubelet[3155]: W1106 00:24:45.875749 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875754 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.876448 kubelet[3155]: E1106 00:24:45.875868 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.876655 kubelet[3155]: W1106 00:24:45.875872 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.876655 kubelet[3155]: E1106 00:24:45.875878 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:45.876655 kubelet[3155]: E1106 00:24:45.876020 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.876655 kubelet[3155]: W1106 00:24:45.876024 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.876655 kubelet[3155]: E1106 00:24:45.876029 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:45.901825 kubelet[3155]: E1106 00:24:45.901801 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:45.901825 kubelet[3155]: W1106 00:24:45.901817 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:45.902462 kubelet[3155]: E1106 00:24:45.901829 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:46.003906 containerd[1697]: time="2025-11-06T00:24:46.003878904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbft2,Uid:21b4f7e8-5b58-4e04-bb84-1e16f58f314a,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:46.054603 containerd[1697]: time="2025-11-06T00:24:46.054567769Z" level=info msg="connecting to shim a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d" address="unix:///run/containerd/s/b77a864de214baca77aa79960b0e487d2a966213fcace2c380e0d73bcab26b83" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:46.070567 systemd[1]: Started cri-containerd-a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d.scope - libcontainer container a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d. Nov 6 00:24:46.100817 containerd[1697]: time="2025-11-06T00:24:46.100708108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbft2,Uid:21b4f7e8-5b58-4e04-bb84-1e16f58f314a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\"" Nov 6 00:24:47.287669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589895518.mount: Deactivated successfully. 
Nov 6 00:24:47.482607 kubelet[3155]: E1106 00:24:47.482567 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:47.700359 containerd[1697]: time="2025-11-06T00:24:47.700305433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:47.710110 containerd[1697]: time="2025-11-06T00:24:47.710033688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:24:47.714308 containerd[1697]: time="2025-11-06T00:24:47.714286314Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:47.717652 containerd[1697]: time="2025-11-06T00:24:47.717611429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:47.717986 containerd[1697]: time="2025-11-06T00:24:47.717897715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.328507942s" Nov 6 00:24:47.717986 containerd[1697]: time="2025-11-06T00:24:47.717923701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:24:47.719203 containerd[1697]: time="2025-11-06T00:24:47.718805034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:24:47.734350 containerd[1697]: time="2025-11-06T00:24:47.734316410Z" level=info msg="CreateContainer within sandbox \"9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:24:47.755741 containerd[1697]: time="2025-11-06T00:24:47.755681343Z" level=info msg="Container 6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:47.772702 containerd[1697]: time="2025-11-06T00:24:47.772674565Z" level=info msg="CreateContainer within sandbox \"9e44da84d35ffab1c3c136b8b76267943f3cd89038f511a45d6b8481c733356e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1\"" Nov 6 00:24:47.773461 containerd[1697]: time="2025-11-06T00:24:47.773037550Z" level=info msg="StartContainer for \"6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1\"" Nov 6 00:24:47.774330 containerd[1697]: time="2025-11-06T00:24:47.774304978Z" level=info msg="connecting to shim 6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1" address="unix:///run/containerd/s/d2cd4d9dfa948a5eb77abe4186de02b58f3d75c70aaaa6afbe8a350ec315392f" protocol=ttrpc version=3 Nov 6 00:24:47.791483 systemd[1]: Started cri-containerd-6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1.scope - libcontainer container 6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1. 
Nov 6 00:24:47.829589 containerd[1697]: time="2025-11-06T00:24:47.829556529Z" level=info msg="StartContainer for \"6efb789b86b77dde5cc6e4f1a38693e70aaac40cdb352b3a3e45bcedb0b3cbc1\" returns successfully" Nov 6 00:24:48.260807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565089158.mount: Deactivated successfully. Nov 6 00:24:48.601987 kubelet[3155]: I1106 00:24:48.601883 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b74db6dc9-tzqmz" podStartSLOduration=2.271482878 podStartE2EDuration="4.601867807s" podCreationTimestamp="2025-11-06 00:24:44 +0000 UTC" firstStartedPulling="2025-11-06 00:24:45.388274911 +0000 UTC m=+28.983789410" lastFinishedPulling="2025-11-06 00:24:47.718659842 +0000 UTC m=+31.314174339" observedRunningTime="2025-11-06 00:24:48.60155785 +0000 UTC m=+32.197072352" watchObservedRunningTime="2025-11-06 00:24:48.601867807 +0000 UTC m=+32.197382302" Nov 6 00:24:48.676033 kubelet[3155]: E1106 00:24:48.675999 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676033 kubelet[3155]: W1106 00:24:48.676026 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676211 kubelet[3155]: E1106 00:24:48.676044 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.676211 kubelet[3155]: E1106 00:24:48.676145 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676211 kubelet[3155]: W1106 00:24:48.676151 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676211 kubelet[3155]: E1106 00:24:48.676159 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.676357 kubelet[3155]: E1106 00:24:48.676251 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676357 kubelet[3155]: W1106 00:24:48.676260 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676357 kubelet[3155]: E1106 00:24:48.676266 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.676463 kubelet[3155]: E1106 00:24:48.676410 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676463 kubelet[3155]: W1106 00:24:48.676415 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676463 kubelet[3155]: E1106 00:24:48.676422 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.676560 kubelet[3155]: E1106 00:24:48.676516 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676560 kubelet[3155]: W1106 00:24:48.676521 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676560 kubelet[3155]: E1106 00:24:48.676527 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.676660 kubelet[3155]: E1106 00:24:48.676604 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676660 kubelet[3155]: W1106 00:24:48.676608 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676660 kubelet[3155]: E1106 00:24:48.676614 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.676759 kubelet[3155]: E1106 00:24:48.676689 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676759 kubelet[3155]: W1106 00:24:48.676693 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676759 kubelet[3155]: E1106 00:24:48.676699 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.676852 kubelet[3155]: E1106 00:24:48.676773 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676852 kubelet[3155]: W1106 00:24:48.676778 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676852 kubelet[3155]: E1106 00:24:48.676783 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.676950 kubelet[3155]: E1106 00:24:48.676864 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676950 kubelet[3155]: W1106 00:24:48.676869 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676950 kubelet[3155]: E1106 00:24:48.676875 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.676950 kubelet[3155]: E1106 00:24:48.676949 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.676950 kubelet[3155]: W1106 00:24:48.676953 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.676950 kubelet[3155]: E1106 00:24:48.676960 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.677147 kubelet[3155]: E1106 00:24:48.677035 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.677147 kubelet[3155]: W1106 00:24:48.677039 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.677147 kubelet[3155]: E1106 00:24:48.677045 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.677147 kubelet[3155]: E1106 00:24:48.677118 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.677147 kubelet[3155]: W1106 00:24:48.677123 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.677147 kubelet[3155]: E1106 00:24:48.677128 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.677333 kubelet[3155]: E1106 00:24:48.677208 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.677333 kubelet[3155]: W1106 00:24:48.677213 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.677333 kubelet[3155]: E1106 00:24:48.677218 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.677333 kubelet[3155]: E1106 00:24:48.677291 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.677333 kubelet[3155]: W1106 00:24:48.677297 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.677333 kubelet[3155]: E1106 00:24:48.677302 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.677527 kubelet[3155]: E1106 00:24:48.677394 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.677527 kubelet[3155]: W1106 00:24:48.677399 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.677527 kubelet[3155]: E1106 00:24:48.677405 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.689687 kubelet[3155]: E1106 00:24:48.689665 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.689687 kubelet[3155]: W1106 00:24:48.689682 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.689872 kubelet[3155]: E1106 00:24:48.689698 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.689872 kubelet[3155]: E1106 00:24:48.689853 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.689872 kubelet[3155]: W1106 00:24:48.689859 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.689872 kubelet[3155]: E1106 00:24:48.689866 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.689989 kubelet[3155]: E1106 00:24:48.689981 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.689989 kubelet[3155]: W1106 00:24:48.689986 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.690135 kubelet[3155]: E1106 00:24:48.690005 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.690204 kubelet[3155]: E1106 00:24:48.690190 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.690204 kubelet[3155]: W1106 00:24:48.690201 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.690255 kubelet[3155]: E1106 00:24:48.690209 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.690430 kubelet[3155]: E1106 00:24:48.690413 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.690430 kubelet[3155]: W1106 00:24:48.690428 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.690488 kubelet[3155]: E1106 00:24:48.690448 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.690619 kubelet[3155]: E1106 00:24:48.690574 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.690619 kubelet[3155]: W1106 00:24:48.690582 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.690619 kubelet[3155]: E1106 00:24:48.690587 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.690739 kubelet[3155]: E1106 00:24:48.690730 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.690739 kubelet[3155]: W1106 00:24:48.690737 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.690799 kubelet[3155]: E1106 00:24:48.690743 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.691140 kubelet[3155]: E1106 00:24:48.691110 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691140 kubelet[3155]: W1106 00:24:48.691135 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691222 kubelet[3155]: E1106 00:24:48.691145 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.691304 kubelet[3155]: E1106 00:24:48.691281 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691304 kubelet[3155]: W1106 00:24:48.691298 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691416 kubelet[3155]: E1106 00:24:48.691305 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.691416 kubelet[3155]: E1106 00:24:48.691411 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691416 kubelet[3155]: W1106 00:24:48.691416 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691477 kubelet[3155]: E1106 00:24:48.691421 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.691546 kubelet[3155]: E1106 00:24:48.691536 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691546 kubelet[3155]: W1106 00:24:48.691543 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691594 kubelet[3155]: E1106 00:24:48.691549 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.691698 kubelet[3155]: E1106 00:24:48.691690 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691698 kubelet[3155]: W1106 00:24:48.691696 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691744 kubelet[3155]: E1106 00:24:48.691701 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.691809 kubelet[3155]: E1106 00:24:48.691794 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691809 kubelet[3155]: W1106 00:24:48.691807 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.691855 kubelet[3155]: E1106 00:24:48.691813 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.691965 kubelet[3155]: E1106 00:24:48.691946 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.691965 kubelet[3155]: W1106 00:24:48.691962 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.692010 kubelet[3155]: E1106 00:24:48.691970 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.692211 kubelet[3155]: E1106 00:24:48.692202 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.692211 kubelet[3155]: W1106 00:24:48.692210 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.692283 kubelet[3155]: E1106 00:24:48.692216 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.692311 kubelet[3155]: E1106 00:24:48.692305 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.692385 kubelet[3155]: W1106 00:24:48.692312 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.692385 kubelet[3155]: E1106 00:24:48.692319 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:48.692445 kubelet[3155]: E1106 00:24:48.692442 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.692465 kubelet[3155]: W1106 00:24:48.692447 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.692465 kubelet[3155]: E1106 00:24:48.692454 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:48.692738 kubelet[3155]: E1106 00:24:48.692724 3155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:48.692738 kubelet[3155]: W1106 00:24:48.692731 3155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:48.692809 kubelet[3155]: E1106 00:24:48.692737 3155 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:49.063497 containerd[1697]: time="2025-11-06T00:24:49.063463051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:49.065687 containerd[1697]: time="2025-11-06T00:24:49.065624506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:24:49.068365 containerd[1697]: time="2025-11-06T00:24:49.068262767Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:49.075428 containerd[1697]: time="2025-11-06T00:24:49.075399998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:49.076141 containerd[1697]: time="2025-11-06T00:24:49.075731780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.356899054s" Nov 6 00:24:49.076141 containerd[1697]: time="2025-11-06T00:24:49.075760161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:24:49.083441 containerd[1697]: time="2025-11-06T00:24:49.083421348Z" level=info msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:24:49.101458 containerd[1697]: time="2025-11-06T00:24:49.101430651Z" level=info msg="Container 5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:49.124662 containerd[1697]: time="2025-11-06T00:24:49.124633090Z" level=info msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\"" Nov 6 00:24:49.126976 containerd[1697]: time="2025-11-06T00:24:49.126913973Z" level=info msg="StartContainer for \"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\"" Nov 6 00:24:49.128699 containerd[1697]: time="2025-11-06T00:24:49.128676245Z" level=info msg="connecting to shim 5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1" address="unix:///run/containerd/s/b77a864de214baca77aa79960b0e487d2a966213fcace2c380e0d73bcab26b83" protocol=ttrpc version=3 Nov 6 00:24:49.150499 systemd[1]: Started cri-containerd-5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1.scope - libcontainer container 5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1. Nov 6 00:24:49.182926 containerd[1697]: time="2025-11-06T00:24:49.182898755Z" level=info msg="StartContainer for \"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\" returns successfully" Nov 6 00:24:49.185988 systemd[1]: cri-containerd-5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1.scope: Deactivated successfully. 
Nov 6 00:24:49.189673 containerd[1697]: time="2025-11-06T00:24:49.189328039Z" level=info msg="received exit event container_id:\"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\" id:\"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\" pid:3852 exited_at:{seconds:1762388689 nanos:189001172}" Nov 6 00:24:49.189763 containerd[1697]: time="2025-11-06T00:24:49.189403703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\" id:\"5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1\" pid:3852 exited_at:{seconds:1762388689 nanos:189001172}" Nov 6 00:24:49.205023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a689ff730513438b8a28c09a555c679dc2604ed7a2fcd3f40f874d3a37dcaf1-rootfs.mount: Deactivated successfully. Nov 6 00:24:49.483222 kubelet[3155]: E1106 00:24:49.483078 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:49.587174 kubelet[3155]: I1106 00:24:49.587142 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:24:50.590628 containerd[1697]: time="2025-11-06T00:24:50.590517956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:24:51.482247 kubelet[3155]: E1106 00:24:51.482201 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:53.483137 kubelet[3155]: E1106 00:24:53.483096 3155 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:54.043740 containerd[1697]: time="2025-11-06T00:24:54.043702913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:54.045691 containerd[1697]: time="2025-11-06T00:24:54.045656368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:24:54.047908 containerd[1697]: time="2025-11-06T00:24:54.047870502Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:54.051028 containerd[1697]: time="2025-11-06T00:24:54.050991290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:54.051563 containerd[1697]: time="2025-11-06T00:24:54.051480219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.460892423s" Nov 6 00:24:54.051563 containerd[1697]: time="2025-11-06T00:24:54.051505487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:24:54.057560 containerd[1697]: time="2025-11-06T00:24:54.057538119Z" level=info 
msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:24:54.076361 containerd[1697]: time="2025-11-06T00:24:54.075980679Z" level=info msg="Container fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:54.090093 containerd[1697]: time="2025-11-06T00:24:54.090071961Z" level=info msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\"" Nov 6 00:24:54.090601 containerd[1697]: time="2025-11-06T00:24:54.090418382Z" level=info msg="StartContainer for \"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\"" Nov 6 00:24:54.091892 containerd[1697]: time="2025-11-06T00:24:54.091862127Z" level=info msg="connecting to shim fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8" address="unix:///run/containerd/s/b77a864de214baca77aa79960b0e487d2a966213fcace2c380e0d73bcab26b83" protocol=ttrpc version=3 Nov 6 00:24:54.112475 systemd[1]: Started cri-containerd-fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8.scope - libcontainer container fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8. 
Nov 6 00:24:54.144354 containerd[1697]: time="2025-11-06T00:24:54.143720681Z" level=info msg="StartContainer for \"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\" returns successfully" Nov 6 00:24:55.368940 containerd[1697]: time="2025-11-06T00:24:55.368872679Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:24:55.370497 systemd[1]: cri-containerd-fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8.scope: Deactivated successfully. Nov 6 00:24:55.370752 systemd[1]: cri-containerd-fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8.scope: Consumed 379ms CPU time, 199.4M memory peak, 171.3M written to disk. Nov 6 00:24:55.373241 containerd[1697]: time="2025-11-06T00:24:55.373207506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\" id:\"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\" pid:3913 exited_at:{seconds:1762388695 nanos:372920134}" Nov 6 00:24:55.374185 containerd[1697]: time="2025-11-06T00:24:55.374036868Z" level=info msg="received exit event container_id:\"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\" id:\"fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8\" pid:3913 exited_at:{seconds:1762388695 nanos:372920134}" Nov 6 00:24:55.392682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc1fa9485aae254e9d6f90336c2dac7e851ef277aefebe04dbabb44748febcd8-rootfs.mount: Deactivated successfully. 
Nov 6 00:24:55.442127 kubelet[3155]: I1106 00:24:55.442100 3155 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:24:55.650142 systemd[1]: Created slice kubepods-burstable-pod69a504f7_574b_4158_b560_e00f616f3ecf.slice - libcontainer container kubepods-burstable-pod69a504f7_574b_4158_b560_e00f616f3ecf.slice. Nov 6 00:24:55.737840 kubelet[3155]: I1106 00:24:55.737815 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a504f7-574b-4158-b560-e00f616f3ecf-config-volume\") pod \"coredns-674b8bbfcf-sqndk\" (UID: \"69a504f7-574b-4158-b560-e00f616f3ecf\") " pod="kube-system/coredns-674b8bbfcf-sqndk" Nov 6 00:24:55.737954 kubelet[3155]: I1106 00:24:55.737886 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcdd\" (UniqueName: \"kubernetes.io/projected/69a504f7-574b-4158-b560-e00f616f3ecf-kube-api-access-gtcdd\") pod \"coredns-674b8bbfcf-sqndk\" (UID: \"69a504f7-574b-4158-b560-e00f616f3ecf\") " pod="kube-system/coredns-674b8bbfcf-sqndk" Nov 6 00:24:55.814449 systemd[1]: Created slice kubepods-besteffort-pod8c6ee567_f5e8_4c03_b991_477d0718cfe0.slice - libcontainer container kubepods-besteffort-pod8c6ee567_f5e8_4c03_b991_477d0718cfe0.slice. 
Nov 6 00:24:55.838358 kubelet[3155]: I1106 00:24:55.838186 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdcb\" (UniqueName: \"kubernetes.io/projected/8c6ee567-f5e8-4c03-b991-477d0718cfe0-kube-api-access-ljdcb\") pod \"whisker-7cb6d9fbcb-ld28s\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " pod="calico-system/whisker-7cb6d9fbcb-ld28s" Nov 6 00:24:55.838358 kubelet[3155]: I1106 00:24:55.838225 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-ca-bundle\") pod \"whisker-7cb6d9fbcb-ld28s\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " pod="calico-system/whisker-7cb6d9fbcb-ld28s" Nov 6 00:24:55.838358 kubelet[3155]: I1106 00:24:55.838245 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-backend-key-pair\") pod \"whisker-7cb6d9fbcb-ld28s\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " pod="calico-system/whisker-7cb6d9fbcb-ld28s" Nov 6 00:24:55.954698 containerd[1697]: time="2025-11-06T00:24:55.954603003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqndk,Uid:69a504f7-574b-4158-b560-e00f616f3ecf,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:55.964207 systemd[1]: Created slice kubepods-burstable-pod1d840568_04dc_45c8_8e7f_aecf5c0782c2.slice - libcontainer container kubepods-burstable-pod1d840568_04dc_45c8_8e7f_aecf5c0782c2.slice. 
Nov 6 00:24:56.039654 kubelet[3155]: I1106 00:24:56.039616 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcw4\" (UniqueName: \"kubernetes.io/projected/1d840568-04dc-45c8-8e7f-aecf5c0782c2-kube-api-access-btcw4\") pod \"coredns-674b8bbfcf-5zbf8\" (UID: \"1d840568-04dc-45c8-8e7f-aecf5c0782c2\") " pod="kube-system/coredns-674b8bbfcf-5zbf8" Nov 6 00:24:56.039654 kubelet[3155]: I1106 00:24:56.039656 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d840568-04dc-45c8-8e7f-aecf5c0782c2-config-volume\") pod \"coredns-674b8bbfcf-5zbf8\" (UID: \"1d840568-04dc-45c8-8e7f-aecf5c0782c2\") " pod="kube-system/coredns-674b8bbfcf-5zbf8" Nov 6 00:24:56.244252 containerd[1697]: time="2025-11-06T00:24:56.244164678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb6d9fbcb-ld28s,Uid:8c6ee567-f5e8-4c03-b991-477d0718cfe0,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:56.261193 systemd[1]: Created slice kubepods-besteffort-pod474a2a28_e33d_439d_b39a_dfe9428b38e0.slice - libcontainer container kubepods-besteffort-pod474a2a28_e33d_439d_b39a_dfe9428b38e0.slice. Nov 6 00:24:56.279588 containerd[1697]: time="2025-11-06T00:24:56.279554210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zbf8,Uid:1d840568-04dc-45c8-8e7f-aecf5c0782c2,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:56.280212 systemd[1]: Created slice kubepods-besteffort-poda9d7d62c_06ef_4717_9bd7_eae5448191dc.slice - libcontainer container kubepods-besteffort-poda9d7d62c_06ef_4717_9bd7_eae5448191dc.slice. 
Nov 6 00:24:56.285437 containerd[1697]: time="2025-11-06T00:24:56.285083590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wmdqb,Uid:a9d7d62c-06ef-4717-9bd7-eae5448191dc,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:56.293063 systemd[1]: Created slice kubepods-besteffort-pod463dc82a_762a_4a15_83ea_246d11d83d8a.slice - libcontainer container kubepods-besteffort-pod463dc82a_762a_4a15_83ea_246d11d83d8a.slice. Nov 6 00:24:56.299157 systemd[1]: Created slice kubepods-besteffort-pod7fa773ed_071b_40cb_8a90_29ff43899047.slice - libcontainer container kubepods-besteffort-pod7fa773ed_071b_40cb_8a90_29ff43899047.slice. Nov 6 00:24:56.306503 systemd[1]: Created slice kubepods-besteffort-podb7b98906_e4ec_4320_b242_7b5d2e64e1f1.slice - libcontainer container kubepods-besteffort-podb7b98906_e4ec_4320_b242_7b5d2e64e1f1.slice. Nov 6 00:24:56.341807 kubelet[3155]: I1106 00:24:56.341711 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/474a2a28-e33d-439d-b39a-dfe9428b38e0-calico-apiserver-certs\") pod \"calico-apiserver-6864945c74-qs8mf\" (UID: \"474a2a28-e33d-439d-b39a-dfe9428b38e0\") " pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" Nov 6 00:24:56.341807 kubelet[3155]: I1106 00:24:56.341752 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqdw\" (UniqueName: \"kubernetes.io/projected/7fa773ed-071b-40cb-8a90-29ff43899047-kube-api-access-dbqdw\") pod \"goldmane-666569f655-ls9gq\" (UID: \"7fa773ed-071b-40cb-8a90-29ff43899047\") " pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.341807 kubelet[3155]: I1106 00:24:56.341779 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7b98906-e4ec-4320-b242-7b5d2e64e1f1-calico-apiserver-certs\") pod 
\"calico-apiserver-6864945c74-pmjl7\" (UID: \"b7b98906-e4ec-4320-b242-7b5d2e64e1f1\") " pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" Nov 6 00:24:56.341943 kubelet[3155]: I1106 00:24:56.341818 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa773ed-071b-40cb-8a90-29ff43899047-config\") pod \"goldmane-666569f655-ls9gq\" (UID: \"7fa773ed-071b-40cb-8a90-29ff43899047\") " pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.341943 kubelet[3155]: I1106 00:24:56.341834 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkrb\" (UniqueName: \"kubernetes.io/projected/b7b98906-e4ec-4320-b242-7b5d2e64e1f1-kube-api-access-qlkrb\") pod \"calico-apiserver-6864945c74-pmjl7\" (UID: \"b7b98906-e4ec-4320-b242-7b5d2e64e1f1\") " pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" Nov 6 00:24:56.341943 kubelet[3155]: I1106 00:24:56.341855 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzkx\" (UniqueName: \"kubernetes.io/projected/474a2a28-e33d-439d-b39a-dfe9428b38e0-kube-api-access-mmzkx\") pod \"calico-apiserver-6864945c74-qs8mf\" (UID: \"474a2a28-e33d-439d-b39a-dfe9428b38e0\") " pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" Nov 6 00:24:56.341943 kubelet[3155]: I1106 00:24:56.341873 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7fa773ed-071b-40cb-8a90-29ff43899047-goldmane-key-pair\") pod \"goldmane-666569f655-ls9gq\" (UID: \"7fa773ed-071b-40cb-8a90-29ff43899047\") " pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.341943 kubelet[3155]: I1106 00:24:56.341895 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7fa773ed-071b-40cb-8a90-29ff43899047-goldmane-ca-bundle\") pod \"goldmane-666569f655-ls9gq\" (UID: \"7fa773ed-071b-40cb-8a90-29ff43899047\") " pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.342239 kubelet[3155]: I1106 00:24:56.341919 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/463dc82a-762a-4a15-83ea-246d11d83d8a-tigera-ca-bundle\") pod \"calico-kube-controllers-5679b55d7c-7hshc\" (UID: \"463dc82a-762a-4a15-83ea-246d11d83d8a\") " pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" Nov 6 00:24:56.342239 kubelet[3155]: I1106 00:24:56.341936 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6pb\" (UniqueName: \"kubernetes.io/projected/463dc82a-762a-4a15-83ea-246d11d83d8a-kube-api-access-tk6pb\") pod \"calico-kube-controllers-5679b55d7c-7hshc\" (UID: \"463dc82a-762a-4a15-83ea-246d11d83d8a\") " pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" Nov 6 00:24:56.402913 containerd[1697]: time="2025-11-06T00:24:56.402887003Z" level=error msg="Failed to destroy network for sandbox \"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.404730 containerd[1697]: time="2025-11-06T00:24:56.404683363Z" level=error msg="Failed to destroy network for sandbox \"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.405294 systemd[1]: run-netns-cni\x2d184b0996\x2d25d9\x2d83c6\x2d4931\x2d95863cf7c166.mount: 
Deactivated successfully. Nov 6 00:24:56.408193 systemd[1]: run-netns-cni\x2d1b918efd\x2dbdd2\x2d4bbe\x2dfeff\x2dc1e327aa9fd8.mount: Deactivated successfully. Nov 6 00:24:56.408281 containerd[1697]: time="2025-11-06T00:24:56.408233112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb6d9fbcb-ld28s,Uid:8c6ee567-f5e8-4c03-b991-477d0718cfe0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.409202 kubelet[3155]: E1106 00:24:56.408472 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.409202 kubelet[3155]: E1106 00:24:56.408530 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cb6d9fbcb-ld28s" Nov 6 00:24:56.409202 kubelet[3155]: E1106 00:24:56.408549 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cb6d9fbcb-ld28s" Nov 6 00:24:56.410238 kubelet[3155]: E1106 00:24:56.408593 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cb6d9fbcb-ld28s_calico-system(8c6ee567-f5e8-4c03-b991-477d0718cfe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cb6d9fbcb-ld28s_calico-system(8c6ee567-f5e8-4c03-b991-477d0718cfe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c045d45d09cf1707975d132aa186a865c4254b8679f9c21073d4cf62ab3e8a3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cb6d9fbcb-ld28s" podUID="8c6ee567-f5e8-4c03-b991-477d0718cfe0" Nov 6 00:24:56.411715 containerd[1697]: time="2025-11-06T00:24:56.411359532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wmdqb,Uid:a9d7d62c-06ef-4717-9bd7-eae5448191dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.411897 kubelet[3155]: E1106 00:24:56.411877 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 6 00:24:56.412025 kubelet[3155]: E1106 00:24:56.411977 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:56.412025 kubelet[3155]: E1106 00:24:56.411996 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wmdqb" Nov 6 00:24:56.412149 kubelet[3155]: E1106 00:24:56.412122 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1454aa511b49c70a07271e5b4f45f02d5dfc2b119c9ac396b1edc0c672452303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:24:56.414019 containerd[1697]: time="2025-11-06T00:24:56.413981007Z" level=error msg="Failed to destroy network for sandbox \"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.416089 systemd[1]: run-netns-cni\x2dcdbbc2a5\x2d36c0\x2dd18c\x2d1c04\x2dcaa406e3a5ad.mount: Deactivated successfully. Nov 6 00:24:56.418831 containerd[1697]: time="2025-11-06T00:24:56.418785090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqndk,Uid:69a504f7-574b-4158-b560-e00f616f3ecf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.419363 kubelet[3155]: E1106 00:24:56.419075 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.419363 kubelet[3155]: E1106 00:24:56.419117 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sqndk" Nov 6 00:24:56.419363 kubelet[3155]: E1106 00:24:56.419134 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sqndk" Nov 6 00:24:56.419461 kubelet[3155]: E1106 00:24:56.419172 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sqndk_kube-system(69a504f7-574b-4158-b560-e00f616f3ecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sqndk_kube-system(69a504f7-574b-4158-b560-e00f616f3ecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af9d5d406bb3b514e3cb86d9b77c0592cea61a55e3560388d157279747412074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sqndk" podUID="69a504f7-574b-4158-b560-e00f616f3ecf" Nov 6 00:24:56.425040 containerd[1697]: time="2025-11-06T00:24:56.425008351Z" level=error msg="Failed to destroy network for sandbox \"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.427050 systemd[1]: run-netns-cni\x2d2c6412f3\x2ddb60\x2dec56\x2db9ec\x2d9198eecd459b.mount: Deactivated successfully. 
Nov 6 00:24:56.427588 containerd[1697]: time="2025-11-06T00:24:56.427324164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zbf8,Uid:1d840568-04dc-45c8-8e7f-aecf5c0782c2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.427672 kubelet[3155]: E1106 00:24:56.427644 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.427707 kubelet[3155]: E1106 00:24:56.427676 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5zbf8" Nov 6 00:24:56.427707 kubelet[3155]: E1106 00:24:56.427693 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5zbf8" 
Nov 6 00:24:56.427757 kubelet[3155]: E1106 00:24:56.427729 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5zbf8_kube-system(1d840568-04dc-45c8-8e7f-aecf5c0782c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5zbf8_kube-system(1d840568-04dc-45c8-8e7f-aecf5c0782c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7c17005ee1e6b201cd5860b7e85b63422d8d9cefa9d7b269f7af14e6b546e41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5zbf8" podUID="1d840568-04dc-45c8-8e7f-aecf5c0782c2" Nov 6 00:24:56.574129 containerd[1697]: time="2025-11-06T00:24:56.574103391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-qs8mf,Uid:474a2a28-e33d-439d-b39a-dfe9428b38e0,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:56.595572 containerd[1697]: time="2025-11-06T00:24:56.595538615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5679b55d7c-7hshc,Uid:463dc82a-762a-4a15-83ea-246d11d83d8a,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:56.603268 containerd[1697]: time="2025-11-06T00:24:56.603191664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls9gq,Uid:7fa773ed-071b-40cb-8a90-29ff43899047,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:56.613133 containerd[1697]: time="2025-11-06T00:24:56.613006612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:24:56.618420 containerd[1697]: time="2025-11-06T00:24:56.618266856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-pmjl7,Uid:b7b98906-e4ec-4320-b242-7b5d2e64e1f1,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:56.645593 containerd[1697]: 
time="2025-11-06T00:24:56.645567612Z" level=error msg="Failed to destroy network for sandbox \"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.650962 containerd[1697]: time="2025-11-06T00:24:56.650886511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-qs8mf,Uid:474a2a28-e33d-439d-b39a-dfe9428b38e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.651355 kubelet[3155]: E1106 00:24:56.651314 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.651701 kubelet[3155]: E1106 00:24:56.651371 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" Nov 6 00:24:56.651701 kubelet[3155]: E1106 00:24:56.651389 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" Nov 6 00:24:56.651701 kubelet[3155]: E1106 00:24:56.651424 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"738efd74636e130b5c892b0aae1cf04bfa9f6e49f05af82fc8315c5a020e4a78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:24:56.700450 containerd[1697]: time="2025-11-06T00:24:56.700409439Z" level=error msg="Failed to destroy network for sandbox \"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.703217 containerd[1697]: time="2025-11-06T00:24:56.703177081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5679b55d7c-7hshc,Uid:463dc82a-762a-4a15-83ea-246d11d83d8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.703751 kubelet[3155]: E1106 00:24:56.703518 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.703751 kubelet[3155]: E1106 00:24:56.703564 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" Nov 6 00:24:56.703751 kubelet[3155]: E1106 00:24:56.703591 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" Nov 6 00:24:56.703870 kubelet[3155]: E1106 00:24:56.703644 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"678b082826f538f735316048dc681eaae1a6083a40467140dc39c75d6843a0f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:24:56.704458 containerd[1697]: time="2025-11-06T00:24:56.704373025Z" level=error msg="Failed to destroy network for sandbox \"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.706995 containerd[1697]: time="2025-11-06T00:24:56.706966512Z" level=error msg="Failed to destroy network for sandbox \"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.707437 containerd[1697]: time="2025-11-06T00:24:56.707417660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls9gq,Uid:7fa773ed-071b-40cb-8a90-29ff43899047,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.707624 kubelet[3155]: E1106 00:24:56.707601 3155 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.707674 kubelet[3155]: E1106 00:24:56.707643 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.707674 kubelet[3155]: E1106 00:24:56.707662 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ls9gq" Nov 6 00:24:56.707730 kubelet[3155]: E1106 00:24:56.707700 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"defa8a578f30fc0d6e662f21dd5f51672d53cd7abe5dfe9eb8bad13cd08e9317\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:24:56.710049 containerd[1697]: time="2025-11-06T00:24:56.710014090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-pmjl7,Uid:b7b98906-e4ec-4320-b242-7b5d2e64e1f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.710183 kubelet[3155]: E1106 00:24:56.710164 3155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:56.710227 kubelet[3155]: E1106 00:24:56.710214 3155 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" Nov 6 00:24:56.710253 kubelet[3155]: E1106 00:24:56.710235 3155 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" Nov 6 00:24:56.710305 kubelet[3155]: E1106 00:24:56.710287 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"455455f3e631eddce8e944d35f80f62bc12049a76079002ce6f5d780f1401070\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:03.191530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827690522.mount: Deactivated successfully. 
Nov 6 00:25:03.222847 containerd[1697]: time="2025-11-06T00:25:03.222806240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:03.224773 containerd[1697]: time="2025-11-06T00:25:03.224739331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:25:03.228734 containerd[1697]: time="2025-11-06T00:25:03.228705611Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:03.231630 containerd[1697]: time="2025-11-06T00:25:03.231590773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:03.231919 containerd[1697]: time="2025-11-06T00:25:03.231895235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.618859651s" Nov 6 00:25:03.231962 containerd[1697]: time="2025-11-06T00:25:03.231928356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:25:03.247657 containerd[1697]: time="2025-11-06T00:25:03.247627711Z" level=info msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:25:03.270463 containerd[1697]: time="2025-11-06T00:25:03.270440288Z" level=info msg="Container 
5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:03.274393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776894118.mount: Deactivated successfully. Nov 6 00:25:03.286935 containerd[1697]: time="2025-11-06T00:25:03.286908628Z" level=info msg="CreateContainer within sandbox \"a24b3e7feeed3e41b53ce3782e30bfb7bbc850c3d3a1b0e825c92523e01d670d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\"" Nov 6 00:25:03.288370 containerd[1697]: time="2025-11-06T00:25:03.287312712Z" level=info msg="StartContainer for \"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\"" Nov 6 00:25:03.288779 containerd[1697]: time="2025-11-06T00:25:03.288752296Z" level=info msg="connecting to shim 5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7" address="unix:///run/containerd/s/b77a864de214baca77aa79960b0e487d2a966213fcace2c380e0d73bcab26b83" protocol=ttrpc version=3 Nov 6 00:25:03.304481 systemd[1]: Started cri-containerd-5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7.scope - libcontainer container 5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7. 
Nov 6 00:25:03.335700 containerd[1697]: time="2025-11-06T00:25:03.335676833Z" level=info msg="StartContainer for \"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" returns successfully" Nov 6 00:25:03.647827 kubelet[3155]: I1106 00:25:03.647753 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gbft2" podStartSLOduration=1.517634874 podStartE2EDuration="18.64765611s" podCreationTimestamp="2025-11-06 00:24:45 +0000 UTC" firstStartedPulling="2025-11-06 00:24:46.102658523 +0000 UTC m=+29.698173013" lastFinishedPulling="2025-11-06 00:25:03.232679755 +0000 UTC m=+46.828194249" observedRunningTime="2025-11-06 00:25:03.646652662 +0000 UTC m=+47.242167256" watchObservedRunningTime="2025-11-06 00:25:03.64765611 +0000 UTC m=+47.243170610" Nov 6 00:25:03.897989 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:25:03.898092 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:25:04.085480 kubelet[3155]: I1106 00:25:04.085447 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-ca-bundle\") pod \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " Nov 6 00:25:04.085617 kubelet[3155]: I1106 00:25:04.085494 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdcb\" (UniqueName: \"kubernetes.io/projected/8c6ee567-f5e8-4c03-b991-477d0718cfe0-kube-api-access-ljdcb\") pod \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " Nov 6 00:25:04.085617 kubelet[3155]: I1106 00:25:04.085513 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-backend-key-pair\") pod \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\" (UID: \"8c6ee567-f5e8-4c03-b991-477d0718cfe0\") " Nov 6 00:25:04.088378 kubelet[3155]: I1106 00:25:04.087917 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8c6ee567-f5e8-4c03-b991-477d0718cfe0" (UID: "8c6ee567-f5e8-4c03-b991-477d0718cfe0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:25:04.090515 kubelet[3155]: I1106 00:25:04.090488 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8c6ee567-f5e8-4c03-b991-477d0718cfe0" (UID: "8c6ee567-f5e8-4c03-b991-477d0718cfe0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:25:04.091318 kubelet[3155]: I1106 00:25:04.091292 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6ee567-f5e8-4c03-b991-477d0718cfe0-kube-api-access-ljdcb" (OuterVolumeSpecName: "kube-api-access-ljdcb") pod "8c6ee567-f5e8-4c03-b991-477d0718cfe0" (UID: "8c6ee567-f5e8-4c03-b991-477d0718cfe0"). InnerVolumeSpecName "kube-api-access-ljdcb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:25:04.096745 containerd[1697]: time="2025-11-06T00:25:04.096654306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"1aec53822e3f95bb5f9ec415021356b4f26251f2e5a5b4364226efeeba314922\" pid:4230 exit_status:1 exited_at:{seconds:1762388704 nanos:96393420}" Nov 6 00:25:04.171993 containerd[1697]: time="2025-11-06T00:25:04.171964502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"b91538c8966d402b054b7bd6b3a0b37e0e4f3087fb7b162a60c737e201e72183\" pid:4263 exit_status:1 exited_at:{seconds:1762388704 nanos:171618876}" Nov 6 00:25:04.186287 kubelet[3155]: I1106 00:25:04.186265 3155 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljdcb\" (UniqueName: \"kubernetes.io/projected/8c6ee567-f5e8-4c03-b991-477d0718cfe0-kube-api-access-ljdcb\") on node \"ci-4459.1.0-n-1b1a1d3a2e\" DevicePath \"\"" Nov 6 00:25:04.186287 kubelet[3155]: I1106 00:25:04.186286 3155 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-1b1a1d3a2e\" DevicePath \"\"" Nov 6 00:25:04.186400 kubelet[3155]: I1106 00:25:04.186294 3155 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c6ee567-f5e8-4c03-b991-477d0718cfe0-whisker-ca-bundle\") on node \"ci-4459.1.0-n-1b1a1d3a2e\" DevicePath \"\"" Nov 6 00:25:04.190115 systemd[1]: var-lib-kubelet-pods-8c6ee567\x2df5e8\x2d4c03\x2db991\x2d477d0718cfe0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljdcb.mount: Deactivated successfully. Nov 6 00:25:04.190206 systemd[1]: var-lib-kubelet-pods-8c6ee567\x2df5e8\x2d4c03\x2db991\x2d477d0718cfe0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:25:04.487885 systemd[1]: Removed slice kubepods-besteffort-pod8c6ee567_f5e8_4c03_b991_477d0718cfe0.slice - libcontainer container kubepods-besteffort-pod8c6ee567_f5e8_4c03_b991_477d0718cfe0.slice. Nov 6 00:25:04.714916 systemd[1]: Created slice kubepods-besteffort-pod920cabd1_828e_4d7e_9f68_76150c797ff9.slice - libcontainer container kubepods-besteffort-pod920cabd1_828e_4d7e_9f68_76150c797ff9.slice. Nov 6 00:25:04.724227 containerd[1697]: time="2025-11-06T00:25:04.724195208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"9181c6dc17cd55c72b416fbda9397c33ab68c1fdd7e382a0d6725a6379717ab4\" pid:4298 exit_status:1 exited_at:{seconds:1762388704 nanos:723975504}" Nov 6 00:25:04.789598 kubelet[3155]: I1106 00:25:04.789515 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/920cabd1-828e-4d7e-9f68-76150c797ff9-whisker-backend-key-pair\") pod \"whisker-5b4f46bb86-cxgxc\" (UID: \"920cabd1-828e-4d7e-9f68-76150c797ff9\") " pod="calico-system/whisker-5b4f46bb86-cxgxc" Nov 6 00:25:04.789598 kubelet[3155]: I1106 00:25:04.789547 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/920cabd1-828e-4d7e-9f68-76150c797ff9-whisker-ca-bundle\") pod \"whisker-5b4f46bb86-cxgxc\" (UID: \"920cabd1-828e-4d7e-9f68-76150c797ff9\") " pod="calico-system/whisker-5b4f46bb86-cxgxc" Nov 6 00:25:04.789598 kubelet[3155]: I1106 00:25:04.789562 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxqq\" (UniqueName: \"kubernetes.io/projected/920cabd1-828e-4d7e-9f68-76150c797ff9-kube-api-access-vxxqq\") pod \"whisker-5b4f46bb86-cxgxc\" (UID: \"920cabd1-828e-4d7e-9f68-76150c797ff9\") " pod="calico-system/whisker-5b4f46bb86-cxgxc" Nov 6 00:25:05.019506 containerd[1697]: time="2025-11-06T00:25:05.019467973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b4f46bb86-cxgxc,Uid:920cabd1-828e-4d7e-9f68-76150c797ff9,Namespace:calico-system,Attempt:0,}" Nov 6 00:25:05.143776 systemd-networkd[1342]: cali071bb7fb71b: Link UP Nov 6 00:25:05.144506 systemd-networkd[1342]: cali071bb7fb71b: Gained carrier Nov 6 00:25:05.162550 containerd[1697]: 2025-11-06 00:25:05.048 [INFO][4313] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:05.162550 containerd[1697]: 2025-11-06 00:25:05.056 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0 whisker-5b4f46bb86- calico-system 920cabd1-828e-4d7e-9f68-76150c797ff9 905 0 2025-11-06 00:25:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b4f46bb86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e whisker-5b4f46bb86-cxgxc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali071bb7fb71b [] [] }} ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" 
Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-" Nov 6 00:25:05.162550 containerd[1697]: 2025-11-06 00:25:05.056 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.162550 containerd[1697]: 2025-11-06 00:25:05.074 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" HandleID="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.074 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" HandleID="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"whisker-5b4f46bb86-cxgxc", "timestamp":"2025-11-06 00:25:05.074882967 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.075 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.075 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.075 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.079 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.081 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.084 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.086 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162766 containerd[1697]: 2025-11-06 00:25:05.088 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.088 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.089 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.095 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.107 [INFO][4325] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.11.193/26] block=192.168.11.192/26 handle="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.107 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.193/26] handle="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.107 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:25:05.162984 containerd[1697]: 2025-11-06 00:25:05.107 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.193/26] IPv6=[] ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" HandleID="k8s-pod-network.3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.163124 containerd[1697]: 2025-11-06 00:25:05.109 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0", GenerateName:"whisker-5b4f46bb86-", Namespace:"calico-system", SelfLink:"", UID:"920cabd1-828e-4d7e-9f68-76150c797ff9", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 25, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b4f46bb86", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"whisker-5b4f46bb86-cxgxc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali071bb7fb71b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:05.163124 containerd[1697]: 2025-11-06 00:25:05.109 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.193/32] ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.163198 containerd[1697]: 2025-11-06 00:25:05.109 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali071bb7fb71b ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.163198 containerd[1697]: 2025-11-06 00:25:05.145 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.163244 containerd[1697]: 2025-11-06 00:25:05.145 [INFO][4313] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0", GenerateName:"whisker-5b4f46bb86-", Namespace:"calico-system", SelfLink:"", UID:"920cabd1-828e-4d7e-9f68-76150c797ff9", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 25, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b4f46bb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d", Pod:"whisker-5b4f46bb86-cxgxc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali071bb7fb71b", MAC:"ae:23:57:f9:44:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:05.163296 containerd[1697]: 2025-11-06 00:25:05.158 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" 
Namespace="calico-system" Pod="whisker-5b4f46bb86-cxgxc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-whisker--5b4f46bb86--cxgxc-eth0" Nov 6 00:25:05.198237 containerd[1697]: time="2025-11-06T00:25:05.198195877Z" level=info msg="connecting to shim 3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d" address="unix:///run/containerd/s/8511f600ea9210750f5258cd6ac6adbfcb0d40bc067327331633fd5462096c68" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:05.220490 systemd[1]: Started cri-containerd-3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d.scope - libcontainer container 3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d. Nov 6 00:25:05.304099 containerd[1697]: time="2025-11-06T00:25:05.304065787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b4f46bb86-cxgxc,Uid:920cabd1-828e-4d7e-9f68-76150c797ff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f408f9bbc4679f97a34aaab2aeb3eaf1698757b830c419f4c113c69252ea98d\"" Nov 6 00:25:05.306816 containerd[1697]: time="2025-11-06T00:25:05.306795266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:05.563166 containerd[1697]: time="2025-11-06T00:25:05.563115711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:05.565493 containerd[1697]: time="2025-11-06T00:25:05.565448111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:05.565603 containerd[1697]: time="2025-11-06T00:25:05.565454549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:05.565684 kubelet[3155]: E1106 00:25:05.565640 3155 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:05.565727 kubelet[3155]: E1106 00:25:05.565701 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:05.565875 kubelet[3155]: E1106 00:25:05.565847 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e82e16cbc54941bcac30186b05aa4902,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProf
ile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:05.567787 containerd[1697]: time="2025-11-06T00:25:05.567608385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:25:05.811603 containerd[1697]: time="2025-11-06T00:25:05.811562549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:05.814092 containerd[1697]: time="2025-11-06T00:25:05.814013445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:05.814092 containerd[1697]: time="2025-11-06T00:25:05.814077035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:05.814449 kubelet[3155]: E1106 00:25:05.814203 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" 
Nov 6 00:25:05.814449 kubelet[3155]: E1106 00:25:05.814244 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:05.814799 kubelet[3155]: E1106 00:25:05.814392 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalatio
n:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:05.815623 kubelet[3155]: E1106 00:25:05.815566 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:25:06.487608 kubelet[3155]: I1106 00:25:06.487552 3155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6ee567-f5e8-4c03-b991-477d0718cfe0" path="/var/lib/kubelet/pods/8c6ee567-f5e8-4c03-b991-477d0718cfe0/volumes" Nov 6 00:25:06.633622 kubelet[3155]: E1106 00:25:06.633129 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:25:06.967471 systemd-networkd[1342]: cali071bb7fb71b: Gained IPv6LL Nov 6 00:25:07.483170 containerd[1697]: time="2025-11-06T00:25:07.483135274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-pmjl7,Uid:b7b98906-e4ec-4320-b242-7b5d2e64e1f1,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:25:07.574522 systemd-networkd[1342]: cali1e8356bd995: Link UP Nov 6 00:25:07.574653 systemd-networkd[1342]: cali1e8356bd995: Gained carrier Nov 6 00:25:07.590282 containerd[1697]: 2025-11-06 00:25:07.507 [INFO][4504] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:07.590282 containerd[1697]: 2025-11-06 00:25:07.514 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0 calico-apiserver-6864945c74- calico-apiserver b7b98906-e4ec-4320-b242-7b5d2e64e1f1 829 0 2025-11-06 00:24:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:6864945c74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e calico-apiserver-6864945c74-pmjl7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e8356bd995 [] [] }} ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-" Nov 6 00:25:07.590282 containerd[1697]: 2025-11-06 00:25:07.515 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.590282 containerd[1697]: 2025-11-06 00:25:07.537 [INFO][4517] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" HandleID="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.537 [INFO][4517] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" HandleID="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"calico-apiserver-6864945c74-pmjl7", "timestamp":"2025-11-06 
00:25:07.53777686 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.537 [INFO][4517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.537 [INFO][4517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.538 [INFO][4517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.541 [INFO][4517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.544 [INFO][4517] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.547 [INFO][4517] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.548 [INFO][4517] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590544 containerd[1697]: 2025-11-06 00:25:07.549 [INFO][4517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.549 [INFO][4517] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590752 containerd[1697]: 
2025-11-06 00:25:07.550 [INFO][4517] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386 Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.554 [INFO][4517] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.567 [INFO][4517] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.194/26] block=192.168.11.192/26 handle="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.567 [INFO][4517] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.194/26] handle="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.567 [INFO][4517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:25:07.590752 containerd[1697]: 2025-11-06 00:25:07.567 [INFO][4517] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.194/26] IPv6=[] ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" HandleID="k8s-pod-network.e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.590895 containerd[1697]: 2025-11-06 00:25:07.571 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0", GenerateName:"calico-apiserver-6864945c74-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b98906-e4ec-4320-b242-7b5d2e64e1f1", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6864945c74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"calico-apiserver-6864945c74-pmjl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8356bd995", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:07.590959 containerd[1697]: 2025-11-06 00:25:07.571 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.194/32] ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.590959 containerd[1697]: 2025-11-06 00:25:07.571 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e8356bd995 ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.590959 containerd[1697]: 2025-11-06 00:25:07.573 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.592992 containerd[1697]: 2025-11-06 00:25:07.573 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0", GenerateName:"calico-apiserver-6864945c74-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7b98906-e4ec-4320-b242-7b5d2e64e1f1", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6864945c74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386", Pod:"calico-apiserver-6864945c74-pmjl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8356bd995", MAC:"92:43:2e:f0:b4:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:07.593075 containerd[1697]: 2025-11-06 00:25:07.587 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-pmjl7" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--pmjl7-eth0" Nov 6 00:25:07.650223 containerd[1697]: time="2025-11-06T00:25:07.649324566Z" level=info 
msg="connecting to shim e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386" address="unix:///run/containerd/s/ecdd91642f16367ca6d0f0f64fa2ab00ad3df933403f4b3e3b0c68196ceb18d0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:07.688568 systemd[1]: Started cri-containerd-e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386.scope - libcontainer container e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386. Nov 6 00:25:07.734034 containerd[1697]: time="2025-11-06T00:25:07.733969461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-pmjl7,Uid:b7b98906-e4ec-4320-b242-7b5d2e64e1f1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e521528a1efba72dc6feebeb9132fd449bea9d008d9016876fc4c96a769b0386\"" Nov 6 00:25:07.735777 containerd[1697]: time="2025-11-06T00:25:07.735758604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:07.981899 containerd[1697]: time="2025-11-06T00:25:07.981870635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:07.984212 containerd[1697]: time="2025-11-06T00:25:07.984149837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:07.984212 containerd[1697]: time="2025-11-06T00:25:07.984202874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:07.984529 kubelet[3155]: E1106 00:25:07.984488 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:07.984763 kubelet[3155]: E1106 00:25:07.984538 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:07.984763 kubelet[3155]: E1106 00:25:07.984666 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlkrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:07.986030 kubelet[3155]: E1106 00:25:07.985978 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:08.483367 containerd[1697]: time="2025-11-06T00:25:08.483209102Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5679b55d7c-7hshc,Uid:463dc82a-762a-4a15-83ea-246d11d83d8a,Namespace:calico-system,Attempt:0,}" Nov 6 00:25:08.484207 containerd[1697]: time="2025-11-06T00:25:08.483467125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqndk,Uid:69a504f7-574b-4158-b560-e00f616f3ecf,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:08.484207 containerd[1697]: time="2025-11-06T00:25:08.483994162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wmdqb,Uid:a9d7d62c-06ef-4717-9bd7-eae5448191dc,Namespace:calico-system,Attempt:0,}" Nov 6 00:25:08.638280 kubelet[3155]: E1106 00:25:08.638241 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:08.746788 systemd-networkd[1342]: calid13f8ab1d92: Link UP Nov 6 00:25:08.746973 systemd-networkd[1342]: calid13f8ab1d92: Gained carrier Nov 6 00:25:08.789867 containerd[1697]: 2025-11-06 00:25:08.552 [INFO][4617] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:08.789867 containerd[1697]: 2025-11-06 00:25:08.568 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0 csi-node-driver- calico-system a9d7d62c-06ef-4717-9bd7-eae5448191dc 726 0 2025-11-06 00:24:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e csi-node-driver-wmdqb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid13f8ab1d92 [] [] }} ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-" Nov 6 00:25:08.789867 containerd[1697]: 2025-11-06 00:25:08.568 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.789867 containerd[1697]: 2025-11-06 00:25:08.618 [INFO][4635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" HandleID="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.619 [INFO][4635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" HandleID="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4550), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"csi-node-driver-wmdqb", "timestamp":"2025-11-06 00:25:08.618860444 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.619 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.619 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.619 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.644 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.652 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.668 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.673 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.790280 containerd[1697]: 2025-11-06 00:25:08.683 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.683 [INFO][4635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.698 [INFO][4635] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.710 [INFO][4635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.195/26] block=192.168.11.192/26 handle="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.195/26] handle="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:25:08.791142 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.195/26] IPv6=[] ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" HandleID="k8s-pod-network.3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.791306 containerd[1697]: 2025-11-06 00:25:08.741 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9d7d62c-06ef-4717-9bd7-eae5448191dc", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"csi-node-driver-wmdqb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid13f8ab1d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.792243 containerd[1697]: 2025-11-06 00:25:08.741 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.195/32] ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.792243 containerd[1697]: 2025-11-06 00:25:08.742 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid13f8ab1d92 ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.792243 containerd[1697]: 2025-11-06 00:25:08.748 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.792335 containerd[1697]: 2025-11-06 00:25:08.749 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"a9d7d62c-06ef-4717-9bd7-eae5448191dc", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b", Pod:"csi-node-driver-wmdqb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid13f8ab1d92", MAC:"76:c3:ed:87:a4:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.792469 containerd[1697]: 2025-11-06 00:25:08.788 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" Namespace="calico-system" Pod="csi-node-driver-wmdqb" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-csi--node--driver--wmdqb-eth0" Nov 6 00:25:08.838101 containerd[1697]: time="2025-11-06T00:25:08.838071233Z" level=info msg="connecting to shim 3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b" address="unix:///run/containerd/s/0a0118ed24656dfb8dc00966d8936b479e270b9253a6cfb578235d9269cdd02c" namespace=k8s.io protocol=ttrpc 
version=3 Nov 6 00:25:08.851431 systemd-networkd[1342]: cali90768b7ef97: Link UP Nov 6 00:25:08.857673 systemd-networkd[1342]: cali90768b7ef97: Gained carrier Nov 6 00:25:08.880559 systemd[1]: Started cri-containerd-3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b.scope - libcontainer container 3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b. Nov 6 00:25:08.892833 containerd[1697]: 2025-11-06 00:25:08.565 [INFO][4606] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:08.892833 containerd[1697]: 2025-11-06 00:25:08.594 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0 coredns-674b8bbfcf- kube-system 69a504f7-574b-4158-b560-e00f616f3ecf 823 0 2025-11-06 00:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e coredns-674b8bbfcf-sqndk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90768b7ef97 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-" Nov 6 00:25:08.892833 containerd[1697]: 2025-11-06 00:25:08.594 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.892833 containerd[1697]: 2025-11-06 00:25:08.649 [INFO][4642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" HandleID="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.651 [INFO][4642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" HandleID="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"coredns-674b8bbfcf-sqndk", "timestamp":"2025-11-06 00:25:08.649231707 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.651 [INFO][4642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.740 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.750 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.763 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.797 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.801 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893002 containerd[1697]: 2025-11-06 00:25:08.807 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.807 [INFO][4642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.810 [INFO][4642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86 Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.817 [INFO][4642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4642] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.11.196/26] block=192.168.11.192/26 handle="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.196/26] handle="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:25:08.893204 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.196/26] IPv6=[] ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" HandleID="k8s-pod-network.da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.843 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69a504f7-574b-4158-b560-e00f616f3ecf", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"coredns-674b8bbfcf-sqndk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90768b7ef97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.843 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.196/32] ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.843 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90768b7ef97 ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.865 [INFO][4606] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.869 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69a504f7-574b-4158-b560-e00f616f3ecf", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86", Pod:"coredns-674b8bbfcf-sqndk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90768b7ef97", MAC:"ee:45:1a:58:07:52", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.893338 containerd[1697]: 2025-11-06 00:25:08.889 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" Namespace="kube-system" Pod="coredns-674b8bbfcf-sqndk" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--sqndk-eth0" Nov 6 00:25:08.928564 systemd-networkd[1342]: calie8e593cfdb3: Link UP Nov 6 00:25:08.929453 systemd-networkd[1342]: calie8e593cfdb3: Gained carrier Nov 6 00:25:08.950941 containerd[1697]: time="2025-11-06T00:25:08.950860202Z" level=info msg="connecting to shim da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86" address="unix:///run/containerd/s/0e6f70700f6765e5f4774554fb197c2e06b76b5c4041ab2e166b7ffe489bba32" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.558 [INFO][4596] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.589 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0 calico-kube-controllers-5679b55d7c- calico-system 463dc82a-762a-4a15-83ea-246d11d83d8a 830 0 2025-11-06 00:24:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5679b55d7c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e calico-kube-controllers-5679b55d7c-7hshc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8e593cfdb3 [] [] }} ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.589 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.659 [INFO][4648] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" HandleID="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.659 [INFO][4648] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" HandleID="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"calico-kube-controllers-5679b55d7c-7hshc", 
"timestamp":"2025-11-06 00:25:08.659660098 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.660 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.839 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.871 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.888 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.895 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.898 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.901 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.902 [INFO][4648] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 
containerd[1697]: 2025-11-06 00:25:08.903 [INFO][4648] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.909 [INFO][4648] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.919 [INFO][4648] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.197/26] block=192.168.11.192/26 handle="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.920 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.197/26] handle="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.920 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:25:08.955450 containerd[1697]: 2025-11-06 00:25:08.920 [INFO][4648] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.197/26] IPv6=[] ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" HandleID="k8s-pod-network.6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.923 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0", GenerateName:"calico-kube-controllers-5679b55d7c-", Namespace:"calico-system", SelfLink:"", UID:"463dc82a-762a-4a15-83ea-246d11d83d8a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5679b55d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"calico-kube-controllers-5679b55d7c-7hshc", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8e593cfdb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.923 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.197/32] ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.923 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8e593cfdb3 ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.930 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.931 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" Pod="calico-kube-controllers-5679b55d7c-7hshc" 
WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0", GenerateName:"calico-kube-controllers-5679b55d7c-", Namespace:"calico-system", SelfLink:"", UID:"463dc82a-762a-4a15-83ea-246d11d83d8a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5679b55d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d", Pod:"calico-kube-controllers-5679b55d7c-7hshc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8e593cfdb3", MAC:"9a:a9:db:08:ba:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:08.956086 containerd[1697]: 2025-11-06 00:25:08.949 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" Namespace="calico-system" 
Pod="calico-kube-controllers-5679b55d7c-7hshc" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--kube--controllers--5679b55d7c--7hshc-eth0" Nov 6 00:25:09.012512 systemd[1]: Started cri-containerd-da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86.scope - libcontainer container da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86. Nov 6 00:25:09.021272 containerd[1697]: time="2025-11-06T00:25:09.021076272Z" level=info msg="connecting to shim 6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d" address="unix:///run/containerd/s/be6358e3440a4a53081cae49616861f963360c60261a0078603408a006552049" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:09.023607 containerd[1697]: time="2025-11-06T00:25:09.023572839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wmdqb,Uid:a9d7d62c-06ef-4717-9bd7-eae5448191dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3523765fb99309263da271ff57bfcc158dff8f357b497f4722fc09d12f91bf4b\"" Nov 6 00:25:09.025947 containerd[1697]: time="2025-11-06T00:25:09.025922681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:25:09.050506 systemd[1]: Started cri-containerd-6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d.scope - libcontainer container 6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d. 
Nov 6 00:25:09.087355 containerd[1697]: time="2025-11-06T00:25:09.087114117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqndk,Uid:69a504f7-574b-4158-b560-e00f616f3ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86\"" Nov 6 00:25:09.099030 containerd[1697]: time="2025-11-06T00:25:09.098909889Z" level=info msg="CreateContainer within sandbox \"da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:25:09.116071 containerd[1697]: time="2025-11-06T00:25:09.116012691Z" level=info msg="Container 63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:09.129522 containerd[1697]: time="2025-11-06T00:25:09.129479864Z" level=info msg="CreateContainer within sandbox \"da2cbe8d019296b833cd41896240dc8a593d9028293cdb19723a2ea945075b86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325\"" Nov 6 00:25:09.130489 containerd[1697]: time="2025-11-06T00:25:09.130458297Z" level=info msg="StartContainer for \"63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325\"" Nov 6 00:25:09.131208 containerd[1697]: time="2025-11-06T00:25:09.131157038Z" level=info msg="connecting to shim 63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325" address="unix:///run/containerd/s/0e6f70700f6765e5f4774554fb197c2e06b76b5c4041ab2e166b7ffe489bba32" protocol=ttrpc version=3 Nov 6 00:25:09.133472 kubelet[3155]: I1106 00:25:09.133337 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:25:09.139846 containerd[1697]: time="2025-11-06T00:25:09.139727683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5679b55d7c-7hshc,Uid:463dc82a-762a-4a15-83ea-246d11d83d8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bd638833a2d0efab311331d64cc652819ad2e0d9bc8660228ea73faa6e8146d\"" Nov 6 00:25:09.143534 systemd-networkd[1342]: cali1e8356bd995: Gained IPv6LL Nov 6 00:25:09.156522 systemd[1]: Started cri-containerd-63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325.scope - libcontainer container 63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325. Nov 6 00:25:09.197052 containerd[1697]: time="2025-11-06T00:25:09.196987761Z" level=info msg="StartContainer for \"63c65608d5e82c56b119b38c9c02d478601d33b2af9a82f7b85e1feda5cb8325\" returns successfully" Nov 6 00:25:09.281483 containerd[1697]: time="2025-11-06T00:25:09.281407780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:09.284552 containerd[1697]: time="2025-11-06T00:25:09.284464848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:25:09.284552 containerd[1697]: time="2025-11-06T00:25:09.284514249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:25:09.284709 kubelet[3155]: E1106 00:25:09.284677 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:09.284754 kubelet[3155]: E1106 00:25:09.284724 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:09.285212 kubelet[3155]: E1106 00:25:09.285123 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:09.285651 containerd[1697]: time="2025-11-06T00:25:09.285552126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:25:09.483664 containerd[1697]: time="2025-11-06T00:25:09.483635135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls9gq,Uid:7fa773ed-071b-40cb-8a90-29ff43899047,Namespace:calico-system,Attempt:0,}" Nov 6 00:25:09.532530 containerd[1697]: time="2025-11-06T00:25:09.532459826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:09.537136 containerd[1697]: time="2025-11-06T00:25:09.536481204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:25:09.537324 containerd[1697]: time="2025-11-06T00:25:09.536517135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:09.537548 kubelet[3155]: E1106 00:25:09.537478 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:09.537548 kubelet[3155]: E1106 00:25:09.537532 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:09.538803 containerd[1697]: time="2025-11-06T00:25:09.537835702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:25:09.538871 kubelet[3155]: E1106 00:25:09.537876 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tk6pb,ReadOnly:true,MountPath:/var/r
un/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:09.539069 kubelet[3155]: E1106 00:25:09.539042 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:09.615939 systemd-networkd[1342]: cali8f784985080: Link UP Nov 6 00:25:09.617383 systemd-networkd[1342]: cali8f784985080: Gained carrier Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.528 [INFO][4871] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.551 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0 goldmane-666569f655- calico-system 7fa773ed-071b-40cb-8a90-29ff43899047 828 0 2025-11-06 00:24:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e goldmane-666569f655-ls9gq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8f784985080 [] [] }} ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.551 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.575 [INFO][4885] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" HandleID="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.575 [INFO][4885] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" HandleID="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"goldmane-666569f655-ls9gq", "timestamp":"2025-11-06 00:25:09.575057525 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.575 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.575 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.575 [INFO][4885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.581 [INFO][4885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.584 [INFO][4885] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.590 [INFO][4885] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.592 [INFO][4885] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.594 [INFO][4885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.594 [INFO][4885] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.595 [INFO][4885] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410 Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.600 [INFO][4885] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.611 [INFO][4885] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.11.198/26] block=192.168.11.192/26 handle="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.611 [INFO][4885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.198/26] handle="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.611 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:25:09.635879 containerd[1697]: 2025-11-06 00:25:09.611 [INFO][4885] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.198/26] IPv6=[] ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" HandleID="k8s-pod-network.103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.613 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7fa773ed-071b-40cb-8a90-29ff43899047", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"goldmane-666569f655-ls9gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f784985080", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.613 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.198/32] ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.613 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f784985080 ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.616 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.617 [INFO][4871] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7fa773ed-071b-40cb-8a90-29ff43899047", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410", Pod:"goldmane-666569f655-ls9gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f784985080", MAC:"62:fe:cb:f5:2e:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:09.636849 containerd[1697]: 2025-11-06 00:25:09.633 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" Namespace="calico-system" Pod="goldmane-666569f655-ls9gq" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-goldmane--666569f655--ls9gq-eth0" Nov 6 00:25:09.648463 kubelet[3155]: E1106 00:25:09.647240 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:09.649951 kubelet[3155]: E1106 00:25:09.649927 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:09.663433 kubelet[3155]: I1106 00:25:09.663385 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sqndk" podStartSLOduration=46.663370369 podStartE2EDuration="46.663370369s" podCreationTimestamp="2025-11-06 00:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:09.66293733 +0000 UTC 
m=+53.258451830" watchObservedRunningTime="2025-11-06 00:25:09.663370369 +0000 UTC m=+53.258884929" Nov 6 00:25:09.700365 containerd[1697]: time="2025-11-06T00:25:09.698483656Z" level=info msg="connecting to shim 103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410" address="unix:///run/containerd/s/cb75c84cb116d753eb2c1001f00c0528e0a001601a8531644caa2eed5d7654fc" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:09.737571 systemd[1]: Started cri-containerd-103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410.scope - libcontainer container 103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410. Nov 6 00:25:09.789871 containerd[1697]: time="2025-11-06T00:25:09.789248495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ls9gq,Uid:7fa773ed-071b-40cb-8a90-29ff43899047,Namespace:calico-system,Attempt:0,} returns sandbox id \"103b012812d4cc3c8678e0f2baaa66ca4cd5c61cfc86dd70893f9caa0adb6410\"" Nov 6 00:25:09.790285 containerd[1697]: time="2025-11-06T00:25:09.790265979Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:09.793470 containerd[1697]: time="2025-11-06T00:25:09.793431058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:25:09.793587 containerd[1697]: time="2025-11-06T00:25:09.793483213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:25:09.793730 kubelet[3155]: E1106 00:25:09.793707 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:09.793849 kubelet[3155]: E1106 00:25:09.793787 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:09.794396 kubelet[3155]: E1106 00:25:09.793965 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,R
ecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:09.794489 containerd[1697]: time="2025-11-06T00:25:09.794047778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:09.795662 kubelet[3155]: E1106 00:25:09.795631 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:25:09.996810 systemd-networkd[1342]: vxlan.calico: Link UP Nov 6 00:25:09.996916 systemd-networkd[1342]: vxlan.calico: Gained carrier Nov 6 00:25:10.031444 containerd[1697]: time="2025-11-06T00:25:10.031402543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:10.034445 containerd[1697]: time="2025-11-06T00:25:10.034414138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:10.035043 containerd[1697]: time="2025-11-06T00:25:10.034495118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:10.035111 kubelet[3155]: E1106 00:25:10.034639 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:10.035111 kubelet[3155]: E1106 00:25:10.034687 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:10.035111 kubelet[3155]: E1106 00:25:10.034834 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbqdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:10.036123 kubelet[3155]: E1106 00:25:10.036090 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:10.039472 systemd-networkd[1342]: calie8e593cfdb3: Gained IPv6LL Nov 6 00:25:10.039653 systemd-networkd[1342]: cali90768b7ef97: Gained IPv6LL Nov 6 00:25:10.359467 systemd-networkd[1342]: calid13f8ab1d92: Gained IPv6LL Nov 6 00:25:10.483411 containerd[1697]: 
time="2025-11-06T00:25:10.483370218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-qs8mf,Uid:474a2a28-e33d-439d-b39a-dfe9428b38e0,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:25:10.583724 systemd-networkd[1342]: cali6790a75e159: Link UP Nov 6 00:25:10.586629 systemd-networkd[1342]: cali6790a75e159: Gained carrier Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.518 [INFO][5063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0 calico-apiserver-6864945c74- calico-apiserver 474a2a28-e33d-439d-b39a-dfe9428b38e0 827 0 2025-11-06 00:24:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6864945c74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e calico-apiserver-6864945c74-qs8mf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6790a75e159 [] [] }} ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.518 [INFO][5063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.540 [INFO][5077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" HandleID="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.540 [INFO][5077] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" HandleID="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"calico-apiserver-6864945c74-qs8mf", "timestamp":"2025-11-06 00:25:10.540840243 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.540 [INFO][5077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.541 [INFO][5077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.541 [INFO][5077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.546 [INFO][5077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.551 [INFO][5077] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.554 [INFO][5077] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.556 [INFO][5077] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.557 [INFO][5077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.557 [INFO][5077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.558 [INFO][5077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0 Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.564 [INFO][5077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.576 [INFO][5077] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.11.199/26] block=192.168.11.192/26 handle="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.576 [INFO][5077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.199/26] handle="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.576 [INFO][5077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:25:10.614447 containerd[1697]: 2025-11-06 00:25:10.576 [INFO][5077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.199/26] IPv6=[] ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" HandleID="k8s-pod-network.419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.578 [INFO][5063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0", GenerateName:"calico-apiserver-6864945c74-", Namespace:"calico-apiserver", SelfLink:"", UID:"474a2a28-e33d-439d-b39a-dfe9428b38e0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6864945c74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"calico-apiserver-6864945c74-qs8mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6790a75e159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.579 [INFO][5063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.199/32] ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.579 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6790a75e159 ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.584 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" 
WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.586 [INFO][5063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0", GenerateName:"calico-apiserver-6864945c74-", Namespace:"calico-apiserver", SelfLink:"", UID:"474a2a28-e33d-439d-b39a-dfe9428b38e0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6864945c74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0", Pod:"calico-apiserver-6864945c74-qs8mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6790a75e159", MAC:"8e:1f:aa:10:13:84", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:10.615199 containerd[1697]: 2025-11-06 00:25:10.611 [INFO][5063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" Namespace="calico-apiserver" Pod="calico-apiserver-6864945c74-qs8mf" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-calico--apiserver--6864945c74--qs8mf-eth0" Nov 6 00:25:10.654926 kubelet[3155]: E1106 00:25:10.654870 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:10.661165 kubelet[3155]: E1106 00:25:10.658488 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:10.661165 kubelet[3155]: E1106 00:25:10.658578 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:25:10.671162 containerd[1697]: time="2025-11-06T00:25:10.671099854Z" level=info msg="connecting to shim 419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0" address="unix:///run/containerd/s/bb21cdcf7d51485bb4988708371539acc9c5d03bd958fa0556a66d133da81338" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:10.700494 systemd[1]: Started cri-containerd-419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0.scope - libcontainer container 419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0. 
Nov 6 00:25:10.748197 containerd[1697]: time="2025-11-06T00:25:10.748153150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864945c74-qs8mf,Uid:474a2a28-e33d-439d-b39a-dfe9428b38e0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"419e278060e2426a27733a136c9c67bf0e2ce8b711394e8b9aa418d3902987e0\"" Nov 6 00:25:10.749713 containerd[1697]: time="2025-11-06T00:25:10.749683282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:11.021313 containerd[1697]: time="2025-11-06T00:25:11.021150090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:11.023690 containerd[1697]: time="2025-11-06T00:25:11.023631577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:11.023843 containerd[1697]: time="2025-11-06T00:25:11.023636862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:11.023873 kubelet[3155]: E1106 00:25:11.023839 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:11.023927 kubelet[3155]: E1106 00:25:11.023881 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:11.024105 kubelet[3155]: E1106 00:25:11.024062 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:11.025325 kubelet[3155]: E1106 00:25:11.025295 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:11.063488 systemd-networkd[1342]: cali8f784985080: Gained IPv6LL Nov 6 00:25:11.483286 containerd[1697]: time="2025-11-06T00:25:11.483250347Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-5zbf8,Uid:1d840568-04dc-45c8-8e7f-aecf5c0782c2,Namespace:kube-system,Attempt:0,}" Nov 6 00:25:11.571054 systemd-networkd[1342]: calic191967f3fc: Link UP Nov 6 00:25:11.571232 systemd-networkd[1342]: calic191967f3fc: Gained carrier Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.513 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0 coredns-674b8bbfcf- kube-system 1d840568-04dc-45c8-8e7f-aecf5c0782c2 826 0 2025-11-06 00:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-1b1a1d3a2e coredns-674b8bbfcf-5zbf8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic191967f3fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.513 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.539 [INFO][5160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" HandleID="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.585900 
containerd[1697]: 2025-11-06 00:25:11.539 [INFO][5160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" HandleID="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-1b1a1d3a2e", "pod":"coredns-674b8bbfcf-5zbf8", "timestamp":"2025-11-06 00:25:11.539402617 +0000 UTC"}, Hostname:"ci-4459.1.0-n-1b1a1d3a2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.539 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.539 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.539 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-1b1a1d3a2e' Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.543 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.546 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.549 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.551 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.552 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.553 [INFO][5160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.554 [INFO][5160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960 Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.557 [INFO][5160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.565 [INFO][5160] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.11.200/26] block=192.168.11.192/26 handle="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.565 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.200/26] handle="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" host="ci-4459.1.0-n-1b1a1d3a2e" Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.565 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:25:11.585900 containerd[1697]: 2025-11-06 00:25:11.565 [INFO][5160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.200/26] IPv6=[] ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" HandleID="k8s-pod-network.e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Workload="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.567 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d840568-04dc-45c8-8e7f-aecf5c0782c2", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"", Pod:"coredns-674b8bbfcf-5zbf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic191967f3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.567 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.200/32] ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.567 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic191967f3fc ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.572 [INFO][5148] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.572 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d840568-04dc-45c8-8e7f-aecf5c0782c2", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-1b1a1d3a2e", ContainerID:"e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960", Pod:"coredns-674b8bbfcf-5zbf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic191967f3fc", MAC:"3e:3c:e5:bb:70:0f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:25:11.588011 containerd[1697]: 2025-11-06 00:25:11.582 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zbf8" WorkloadEndpoint="ci--4459.1.0--n--1b1a1d3a2e-k8s-coredns--674b8bbfcf--5zbf8-eth0" Nov 6 00:25:11.625178 containerd[1697]: time="2025-11-06T00:25:11.625146799Z" level=info msg="connecting to shim e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960" address="unix:///run/containerd/s/2dd1228fd0f78f3f5deb820f8ac8a69fac67f9ec9629388c3d1cb8801e5ffdd0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:25:11.652466 systemd[1]: Started cri-containerd-e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960.scope - libcontainer container e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960. 
Nov 6 00:25:11.657689 kubelet[3155]: E1106 00:25:11.657659 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:11.658801 kubelet[3155]: E1106 00:25:11.658764 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:11.705905 containerd[1697]: time="2025-11-06T00:25:11.705872367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zbf8,Uid:1d840568-04dc-45c8-8e7f-aecf5c0782c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960\"" Nov 6 00:25:11.713027 containerd[1697]: time="2025-11-06T00:25:11.713004396Z" level=info msg="CreateContainer within sandbox \"e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:25:11.739369 containerd[1697]: time="2025-11-06T00:25:11.739294103Z" level=info msg="Container 
267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:25:11.742266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728563153.mount: Deactivated successfully. Nov 6 00:25:11.749144 containerd[1697]: time="2025-11-06T00:25:11.749121586Z" level=info msg="CreateContainer within sandbox \"e6634493fd5548c3206676aed0767bc625d2fb5809c281dde45c555abdf52960\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a\"" Nov 6 00:25:11.749563 containerd[1697]: time="2025-11-06T00:25:11.749508369Z" level=info msg="StartContainer for \"267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a\"" Nov 6 00:25:11.750454 containerd[1697]: time="2025-11-06T00:25:11.750414995Z" level=info msg="connecting to shim 267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a" address="unix:///run/containerd/s/2dd1228fd0f78f3f5deb820f8ac8a69fac67f9ec9629388c3d1cb8801e5ffdd0" protocol=ttrpc version=3 Nov 6 00:25:11.772464 systemd[1]: Started cri-containerd-267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a.scope - libcontainer container 267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a. 
Nov 6 00:25:11.796435 containerd[1697]: time="2025-11-06T00:25:11.796415565Z" level=info msg="StartContainer for \"267b5243e8e1cf5fc206371113f427c81b0ae932f3469486fd83db771d9fc07a\" returns successfully" Nov 6 00:25:11.895481 systemd-networkd[1342]: vxlan.calico: Gained IPv6LL Nov 6 00:25:12.023461 systemd-networkd[1342]: cali6790a75e159: Gained IPv6LL Nov 6 00:25:12.664001 kubelet[3155]: E1106 00:25:12.663922 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:12.678253 kubelet[3155]: I1106 00:25:12.678203 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5zbf8" podStartSLOduration=49.678189488 podStartE2EDuration="49.678189488s" podCreationTimestamp="2025-11-06 00:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:25:12.677928424 +0000 UTC m=+56.273442921" watchObservedRunningTime="2025-11-06 00:25:12.678189488 +0000 UTC m=+56.273703989" Nov 6 00:25:12.983564 systemd-networkd[1342]: calic191967f3fc: Gained IPv6LL Nov 6 00:25:21.485362 containerd[1697]: time="2025-11-06T00:25:21.484794983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:21.748076 containerd[1697]: time="2025-11-06T00:25:21.747957004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:21.750485 containerd[1697]: 
time="2025-11-06T00:25:21.750456339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:21.750485 containerd[1697]: time="2025-11-06T00:25:21.750499221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:21.750750 kubelet[3155]: E1106 00:25:21.750720 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:21.751005 kubelet[3155]: E1106 00:25:21.750758 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:21.751033 kubelet[3155]: E1106 00:25:21.751001 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e82e16cbc54941bcac30186b05aa4902,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:21.753436 containerd[1697]: time="2025-11-06T00:25:21.753410802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:25:22.005568 containerd[1697]: time="2025-11-06T00:25:22.005416232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:22.007969 containerd[1697]: time="2025-11-06T00:25:22.007939461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:22.008102 containerd[1697]: time="2025-11-06T00:25:22.007993171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:22.008148 kubelet[3155]: E1106 00:25:22.008116 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:22.008191 kubelet[3155]: E1106 00:25:22.008160 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:22.008296 kubelet[3155]: E1106 00:25:22.008260 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:22.009538 kubelet[3155]: E1106 00:25:22.009490 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:25:23.484016 containerd[1697]: time="2025-11-06T00:25:23.483931525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:23.763846 containerd[1697]: time="2025-11-06T00:25:23.763721087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:23.766568 containerd[1697]: time="2025-11-06T00:25:23.766536280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:23.766658 containerd[1697]: time="2025-11-06T00:25:23.766544089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:23.766771 
kubelet[3155]: E1106 00:25:23.766737 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:23.767016 kubelet[3155]: E1106 00:25:23.766782 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:23.767751 kubelet[3155]: E1106 00:25:23.767228 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbqdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:23.768416 kubelet[3155]: E1106 00:25:23.768387 3155 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:24.485540 containerd[1697]: time="2025-11-06T00:25:24.485505128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:24.734409 containerd[1697]: time="2025-11-06T00:25:24.734369014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:24.738711 containerd[1697]: time="2025-11-06T00:25:24.738638538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:24.738711 containerd[1697]: time="2025-11-06T00:25:24.738695525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:24.738931 kubelet[3155]: E1106 00:25:24.738791 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:24.738931 kubelet[3155]: E1106 00:25:24.738828 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:24.739113 kubelet[3155]: E1106 00:25:24.739007 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlkrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:24.740115 containerd[1697]: time="2025-11-06T00:25:24.740074554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:25:24.741144 kubelet[3155]: E1106 00:25:24.741113 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:24.977155 containerd[1697]: 
time="2025-11-06T00:25:24.977109307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:24.980607 containerd[1697]: time="2025-11-06T00:25:24.980583861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:25:24.980684 containerd[1697]: time="2025-11-06T00:25:24.980600388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:25:24.980801 kubelet[3155]: E1106 00:25:24.980764 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:24.981125 kubelet[3155]: E1106 00:25:24.980815 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:24.981125 kubelet[3155]: E1106 00:25:24.980943 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:24.983315 containerd[1697]: time="2025-11-06T00:25:24.983275044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:25:25.250106 containerd[1697]: time="2025-11-06T00:25:25.250056225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:25.253793 containerd[1697]: time="2025-11-06T00:25:25.253756973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:25:25.253846 containerd[1697]: time="2025-11-06T00:25:25.253828411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:25:25.254005 kubelet[3155]: E1106 00:25:25.253953 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:25.254082 kubelet[3155]: E1106 00:25:25.254018 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:25.254460 kubelet[3155]: E1106 
00:25:25.254150 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:25.255386 kubelet[3155]: E1106 00:25:25.255318 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:25:25.483506 containerd[1697]: time="2025-11-06T00:25:25.483479491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:25:25.735279 containerd[1697]: time="2025-11-06T00:25:25.735240794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:25.740152 containerd[1697]: time="2025-11-06T00:25:25.740104689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Nov 6 00:25:25.740228 containerd[1697]: time="2025-11-06T00:25:25.740107494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:25.740380 kubelet[3155]: E1106 00:25:25.740331 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:25.740444 kubelet[3155]: E1106 00:25:25.740391 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:25.740570 kubelet[3155]: E1106 00:25:25.740531 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tk6pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:25.742574 kubelet[3155]: E1106 00:25:25.742535 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:26.484271 containerd[1697]: time="2025-11-06T00:25:26.484234115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:26.730976 containerd[1697]: 
time="2025-11-06T00:25:26.730934445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:26.733560 containerd[1697]: time="2025-11-06T00:25:26.733529836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:26.733611 containerd[1697]: time="2025-11-06T00:25:26.733582241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:26.733715 kubelet[3155]: E1106 00:25:26.733687 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:26.734002 kubelet[3155]: E1106 00:25:26.733724 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:26.734002 kubelet[3155]: E1106 00:25:26.733854 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:26.735632 kubelet[3155]: E1106 00:25:26.735283 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:34.688329 containerd[1697]: time="2025-11-06T00:25:34.688291378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"6fae3eaa73ae5a602aebbd7e9fae361319e03cbfa0072e8b23bc50c3096a21da\" pid:5301 exited_at:{seconds:1762388734 nanos:687384418}" Nov 6 00:25:36.489359 kubelet[3155]: E1106 00:25:36.489283 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:36.491705 kubelet[3155]: E1106 00:25:36.491544 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:25:37.484177 kubelet[3155]: E1106 00:25:37.484138 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:38.484670 kubelet[3155]: E1106 00:25:38.484621 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:39.483867 kubelet[3155]: E1106 00:25:39.483558 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:40.489436 kubelet[3155]: E1106 00:25:40.489317 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:25:50.486279 containerd[1697]: time="2025-11-06T00:25:50.486233567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:25:50.759882 
containerd[1697]: time="2025-11-06T00:25:50.759769043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:50.762288 containerd[1697]: time="2025-11-06T00:25:50.762249125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:25:50.762410 containerd[1697]: time="2025-11-06T00:25:50.762318628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:50.762495 kubelet[3155]: E1106 00:25:50.762437 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:50.762821 kubelet[3155]: E1106 00:25:50.762504 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:50.762821 kubelet[3155]: E1106 00:25:50.762648 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tk6pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:50.764529 kubelet[3155]: E1106 00:25:50.764498 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:25:51.484530 containerd[1697]: time="2025-11-06T00:25:51.484483240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:51.747027 containerd[1697]: 
time="2025-11-06T00:25:51.746701957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:51.750195 containerd[1697]: time="2025-11-06T00:25:51.750110503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:51.750436 containerd[1697]: time="2025-11-06T00:25:51.750141351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:51.750931 kubelet[3155]: E1106 00:25:51.750896 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:51.751006 kubelet[3155]: E1106 00:25:51.750944 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:51.751547 containerd[1697]: time="2025-11-06T00:25:51.751429290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:51.752231 kubelet[3155]: E1106 00:25:51.751647 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbqdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:51.753715 kubelet[3155]: E1106 00:25:51.753682 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:25:51.994232 containerd[1697]: time="2025-11-06T00:25:51.994098754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:51.996692 containerd[1697]: time="2025-11-06T00:25:51.996632879Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:51.996692 containerd[1697]: time="2025-11-06T00:25:51.996676555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:51.996890 kubelet[3155]: E1106 00:25:51.996855 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:51.997202 kubelet[3155]: E1106 00:25:51.996908 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:51.997202 kubelet[3155]: E1106 00:25:51.997045 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e82e16cbc54941bcac30186b05aa4902,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:52.001953 containerd[1697]: time="2025-11-06T00:25:52.001928142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:25:52.250058 containerd[1697]: time="2025-11-06T00:25:52.249935825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:52.252613 containerd[1697]: time="2025-11-06T00:25:52.252570116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:52.252711 containerd[1697]: time="2025-11-06T00:25:52.252663250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:52.253147 kubelet[3155]: E1106 00:25:52.252868 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:52.253202 kubelet[3155]: E1106 00:25:52.253163 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:52.253671 kubelet[3155]: E1106 00:25:52.253631 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:52.254836 kubelet[3155]: E1106 00:25:52.254803 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:25:52.485207 containerd[1697]: time="2025-11-06T00:25:52.485124800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:52.731792 containerd[1697]: time="2025-11-06T00:25:52.731748563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:52.734455 containerd[1697]: time="2025-11-06T00:25:52.734359397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:52.734455 containerd[1697]: time="2025-11-06T00:25:52.734428786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:52.735088 
kubelet[3155]: E1106 00:25:52.734643 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:52.735088 kubelet[3155]: E1106 00:25:52.734685 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:52.735088 kubelet[3155]: E1106 00:25:52.734874 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlkrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:52.735553 containerd[1697]: time="2025-11-06T00:25:52.735531464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:52.736035 kubelet[3155]: E1106 00:25:52.736007 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:25:52.976947 containerd[1697]: time="2025-11-06T00:25:52.976900166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:52.980797 containerd[1697]: time="2025-11-06T00:25:52.980684817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:52.980797 containerd[1697]: time="2025-11-06T00:25:52.980772651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:52.981161 kubelet[3155]: E1106 00:25:52.981077 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:52.981161 kubelet[3155]: E1106 00:25:52.981138 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:52.982196 kubelet[3155]: E1106 00:25:52.982065 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:52.983649 kubelet[3155]: E1106 00:25:52.983450 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:25:54.485808 containerd[1697]: time="2025-11-06T00:25:54.485397246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:25:54.733361 containerd[1697]: 
time="2025-11-06T00:25:54.732406313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:54.742116 containerd[1697]: time="2025-11-06T00:25:54.741304848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:25:54.742116 containerd[1697]: time="2025-11-06T00:25:54.742072010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:25:54.742305 kubelet[3155]: E1106 00:25:54.742274 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:54.742593 kubelet[3155]: E1106 00:25:54.742319 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:54.742593 kubelet[3155]: E1106 00:25:54.742457 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:54.746190 containerd[1697]: time="2025-11-06T00:25:54.746008167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:25:55.001779 containerd[1697]: time="2025-11-06T00:25:55.001671197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:55.005362 containerd[1697]: time="2025-11-06T00:25:55.004568727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:25:55.005362 containerd[1697]: time="2025-11-06T00:25:55.004659580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:25:55.005490 kubelet[3155]: E1106 00:25:55.004794 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:55.005490 kubelet[3155]: E1106 00:25:55.004838 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:55.005490 kubelet[3155]: E1106 
00:25:55.004965 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:55.006448 kubelet[3155]: E1106 00:25:55.006408 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:26:04.486265 kubelet[3155]: E1106 00:26:04.485843 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:26:04.488975 kubelet[3155]: E1106 00:26:04.488932 3155 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:26:04.773777 containerd[1697]: time="2025-11-06T00:26:04.773400738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"a62d52b3587aff6abf615ffe39ce25b3e5309efb3154cc24d7733aebf1d4510d\" pid:5340 exited_at:{seconds:1762388764 nanos:773123034}" Nov 6 00:26:05.484175 kubelet[3155]: E1106 00:26:05.484131 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:26:05.484437 kubelet[3155]: E1106 00:26:05.484415 3155 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:26:07.484011 kubelet[3155]: E1106 00:26:07.483922 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:26:07.485529 kubelet[3155]: E1106 00:26:07.485423 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:26:15.484869 kubelet[3155]: E1106 00:26:15.484596 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:26:16.484221 kubelet[3155]: E1106 00:26:16.483694 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:26:16.485564 
kubelet[3155]: E1106 00:26:16.485535 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:26:18.483949 kubelet[3155]: E1106 00:26:18.483408 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:26:20.489626 kubelet[3155]: E1106 00:26:20.489512 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:26:21.484759 kubelet[3155]: E1106 00:26:21.484687 3155 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:26:26.485050 kubelet[3155]: E1106 00:26:26.485002 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:26:28.485309 kubelet[3155]: E1106 00:26:28.484440 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:26:30.484783 kubelet[3155]: E1106 00:26:30.484742 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:26:31.483264 kubelet[3155]: E1106 00:26:31.483190 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 
00:26:31.484235 containerd[1697]: time="2025-11-06T00:26:31.483640616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:26:31.724285 containerd[1697]: time="2025-11-06T00:26:31.724245280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:31.732631 containerd[1697]: time="2025-11-06T00:26:31.732584705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:26:31.732738 containerd[1697]: time="2025-11-06T00:26:31.732686115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:26:31.733684 kubelet[3155]: E1106 00:26:31.733388 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:26:31.733684 kubelet[3155]: E1106 00:26:31.733453 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:26:31.733684 kubelet[3155]: E1106 00:26:31.733615 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tk6pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5679b55d7c-7hshc_calico-system(463dc82a-762a-4a15-83ea-246d11d83d8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:31.734918 kubelet[3155]: E1106 00:26:31.734785 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:26:34.686041 containerd[1697]: time="2025-11-06T00:26:34.685986553Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"35f22f30bbea29e343ecd94bdcf2cd34b27e88473781251fe7f2e5d0b0b16948\" pid:5375 exited_at:{seconds:1762388794 nanos:685514743}" Nov 6 00:26:35.484099 containerd[1697]: time="2025-11-06T00:26:35.484056588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:26:35.731071 containerd[1697]: time="2025-11-06T00:26:35.731034805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:35.740076 containerd[1697]: time="2025-11-06T00:26:35.739895674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:26:35.740076 containerd[1697]: time="2025-11-06T00:26:35.739916815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:26:35.740159 kubelet[3155]: E1106 00:26:35.740108 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:26:35.740159 kubelet[3155]: E1106 00:26:35.740153 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:26:35.740499 kubelet[3155]: E1106 00:26:35.740277 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:35.742511 containerd[1697]: time="2025-11-06T00:26:35.742481900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:26:35.992315 containerd[1697]: time="2025-11-06T00:26:35.992172530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:35.994684 containerd[1697]: time="2025-11-06T00:26:35.994599267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:26:35.994684 containerd[1697]: time="2025-11-06T00:26:35.994634048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:26:35.994826 kubelet[3155]: E1106 00:26:35.994795 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:26:35.994879 kubelet[3155]: E1106 00:26:35.994840 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:26:35.995125 kubelet[3155]: E1106 00:26:35.994978 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wmdqb_calico-system(a9d7d62c-06ef-4717-9bd7-eae5448191dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:35.996301 kubelet[3155]: E1106 00:26:35.996283 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:26:40.093992 systemd[1]: Started sshd@7-10.200.8.43:22-10.200.16.10:48568.service - OpenSSH per-connection server daemon (10.200.16.10:48568). 
Nov 6 00:26:40.484445 containerd[1697]: time="2025-11-06T00:26:40.483996228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:26:40.736963 containerd[1697]: time="2025-11-06T00:26:40.736247095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:40.737066 sshd[5391]: Accepted publickey for core from 10.200.16.10 port 48568 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:40.740267 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.742150 containerd[1697]: time="2025-11-06T00:26:40.742033335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:26:40.742150 containerd[1697]: time="2025-11-06T00:26:40.742122153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:26:40.743029 kubelet[3155]: E1106 00:26:40.742870 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:40.743029 kubelet[3155]: E1106 00:26:40.742971 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:40.743328 kubelet[3155]: E1106 
00:26:40.743171 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e82e16cbc54941bcac30186b05aa4902,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:40.745234 containerd[1697]: time="2025-11-06T00:26:40.744648340Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:26:40.750309 systemd-logind[1681]: New session 10 of user core. Nov 6 00:26:40.756323 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:26:40.996608 containerd[1697]: time="2025-11-06T00:26:40.996491925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:40.998859 containerd[1697]: time="2025-11-06T00:26:40.998814131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:26:40.998956 containerd[1697]: time="2025-11-06T00:26:40.998820745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:40.999008 kubelet[3155]: E1106 00:26:40.998976 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:26:40.999059 kubelet[3155]: E1106 00:26:40.999020 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:26:40.999732 kubelet[3155]: E1106 00:26:40.999234 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbqdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ls9gq_calico-system(7fa773ed-071b-40cb-8a90-29ff43899047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:40.999904 containerd[1697]: time="2025-11-06T00:26:40.999429854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:26:41.001247 kubelet[3155]: E1106 00:26:41.001212 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:26:41.244806 containerd[1697]: time="2025-11-06T00:26:41.244766678Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Nov 6 00:26:41.247625 containerd[1697]: time="2025-11-06T00:26:41.247019576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:26:41.247625 containerd[1697]: time="2025-11-06T00:26:41.247060440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:26:41.247739 kubelet[3155]: E1106 00:26:41.247199 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:41.247739 kubelet[3155]: E1106 00:26:41.247235 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:41.247892 kubelet[3155]: E1106 00:26:41.247852 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxxqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b4f46bb86-cxgxc_calico-system(920cabd1-828e-4d7e-9f68-76150c797ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:41.249373 kubelet[3155]: E1106 00:26:41.249106 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:26:41.256082 sshd[5394]: Connection closed by 10.200.16.10 port 48568 Nov 6 00:26:41.256535 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:41.259706 systemd-logind[1681]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:26:41.260200 systemd[1]: sshd@7-10.200.8.43:22-10.200.16.10:48568.service: Deactivated successfully. Nov 6 00:26:41.261964 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:26:41.263470 systemd-logind[1681]: Removed session 10. 
Nov 6 00:26:44.486097 containerd[1697]: time="2025-11-06T00:26:44.485859020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:26:44.731726 containerd[1697]: time="2025-11-06T00:26:44.731644100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:44.734356 containerd[1697]: time="2025-11-06T00:26:44.734230947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:26:44.734356 containerd[1697]: time="2025-11-06T00:26:44.734323430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:44.735441 kubelet[3155]: E1106 00:26:44.734641 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:44.735441 kubelet[3155]: E1106 00:26:44.735412 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:44.736291 kubelet[3155]: E1106 00:26:44.736042 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-qs8mf_calico-apiserver(474a2a28-e33d-439d-b39a-dfe9428b38e0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:44.737920 kubelet[3155]: E1106 00:26:44.737785 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:26:45.485807 containerd[1697]: time="2025-11-06T00:26:45.485522387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:26:45.737221 containerd[1697]: time="2025-11-06T00:26:45.737026906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:45.740318 containerd[1697]: time="2025-11-06T00:26:45.740223893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:26:45.740318 containerd[1697]: time="2025-11-06T00:26:45.740306224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:45.740598 kubelet[3155]: E1106 00:26:45.740548 3155 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:45.740803 kubelet[3155]: E1106 00:26:45.740613 3155 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:45.740803 kubelet[3155]: E1106 00:26:45.740755 3155 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlkrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6864945c74-pmjl7_calico-apiserver(b7b98906-e4ec-4320-b242-7b5d2e64e1f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:45.742478 kubelet[3155]: E1106 00:26:45.742412 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:26:46.373294 systemd[1]: Started sshd@8-10.200.8.43:22-10.200.16.10:48582.service - OpenSSH per-connection server daemon (10.200.16.10:48582). 
Nov 6 00:26:46.485154 kubelet[3155]: E1106 00:26:46.484802 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:26:47.017882 sshd[5414]: Accepted publickey for core from 10.200.16.10 port 48582 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:47.020308 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:47.026359 systemd-logind[1681]: New session 11 of user core. Nov 6 00:26:47.031504 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 6 00:26:47.484929 kubelet[3155]: E1106 00:26:47.484876 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:26:47.509941 sshd[5417]: Connection closed by 10.200.16.10 port 48582 Nov 6 00:26:47.510696 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:47.518077 systemd[1]: sshd@8-10.200.8.43:22-10.200.16.10:48582.service: Deactivated successfully. Nov 6 00:26:47.521012 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:26:47.522707 systemd-logind[1681]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:26:47.525305 systemd-logind[1681]: Removed session 11. 
Nov 6 00:26:52.486627 kubelet[3155]: E1106 00:26:52.486523 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:26:52.488413 kubelet[3155]: E1106 00:26:52.487874 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:26:52.620591 systemd[1]: Started sshd@9-10.200.8.43:22-10.200.16.10:38858.service - OpenSSH per-connection server daemon (10.200.16.10:38858). 
Nov 6 00:26:53.255436 sshd[5444]: Accepted publickey for core from 10.200.16.10 port 38858 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:53.256485 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:53.259978 systemd-logind[1681]: New session 12 of user core. Nov 6 00:26:53.265538 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:26:53.762241 sshd[5449]: Connection closed by 10.200.16.10 port 38858 Nov 6 00:26:53.763952 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:53.767316 systemd[1]: sshd@9-10.200.8.43:22-10.200.16.10:38858.service: Deactivated successfully. Nov 6 00:26:53.769313 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:26:53.771888 systemd-logind[1681]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:26:53.774199 systemd-logind[1681]: Removed session 12. Nov 6 00:26:53.880004 systemd[1]: Started sshd@10-10.200.8.43:22-10.200.16.10:38872.service - OpenSSH per-connection server daemon (10.200.16.10:38872). Nov 6 00:26:54.509136 sshd[5462]: Accepted publickey for core from 10.200.16.10 port 38872 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:54.510836 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:54.517619 systemd-logind[1681]: New session 13 of user core. Nov 6 00:26:54.520649 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:26:55.024039 sshd[5465]: Connection closed by 10.200.16.10 port 38872 Nov 6 00:26:55.026502 sshd-session[5462]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:55.029758 systemd-logind[1681]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:26:55.031939 systemd[1]: sshd@10-10.200.8.43:22-10.200.16.10:38872.service: Deactivated successfully. Nov 6 00:26:55.033757 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 6 00:26:55.035160 systemd-logind[1681]: Removed session 13. Nov 6 00:26:55.137644 systemd[1]: Started sshd@11-10.200.8.43:22-10.200.16.10:38882.service - OpenSSH per-connection server daemon (10.200.16.10:38882). Nov 6 00:26:55.783483 sshd[5475]: Accepted publickey for core from 10.200.16.10 port 38882 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:26:55.785203 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:55.790504 systemd-logind[1681]: New session 14 of user core. Nov 6 00:26:55.794504 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:26:56.310614 sshd[5478]: Connection closed by 10.200.16.10 port 38882 Nov 6 00:26:56.310968 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:56.317316 systemd[1]: sshd@11-10.200.8.43:22-10.200.16.10:38882.service: Deactivated successfully. Nov 6 00:26:56.321696 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:26:56.325099 systemd-logind[1681]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:26:56.326599 systemd-logind[1681]: Removed session 14. 
Nov 6 00:26:56.485936 kubelet[3155]: E1106 00:26:56.485358 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:26:58.485004 kubelet[3155]: E1106 00:26:58.484789 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:26:59.483981 kubelet[3155]: E1106 00:26:59.483932 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:27:00.488179 kubelet[3155]: E1106 
00:27:00.488092 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:27:01.429555 systemd[1]: Started sshd@12-10.200.8.43:22-10.200.16.10:50422.service - OpenSSH per-connection server daemon (10.200.16.10:50422). Nov 6 00:27:02.063980 sshd[5490]: Accepted publickey for core from 10.200.16.10 port 50422 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:02.065175 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:02.069514 systemd-logind[1681]: New session 15 of user core. Nov 6 00:27:02.072495 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:27:02.586586 sshd[5493]: Connection closed by 10.200.16.10 port 50422 Nov 6 00:27:02.589516 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:02.594323 systemd[1]: sshd@12-10.200.8.43:22-10.200.16.10:50422.service: Deactivated successfully. Nov 6 00:27:02.599212 systemd[1]: session-15.scope: Deactivated successfully. 
Nov 6 00:27:02.600389 systemd-logind[1681]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:27:02.602538 systemd-logind[1681]: Removed session 15. Nov 6 00:27:03.483892 kubelet[3155]: E1106 00:27:03.483836 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:27:04.704308 containerd[1697]: time="2025-11-06T00:27:04.704268810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"a42239c32c9711a50413298e459b0a552051d8832bfb9e628480ee6042b2f327\" pid:5521 exit_status:1 exited_at:{seconds:1762388824 nanos:703202333}" Nov 6 00:27:06.486634 kubelet[3155]: E1106 00:27:06.486532 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:27:07.697935 systemd[1]: Started sshd@13-10.200.8.43:22-10.200.16.10:50428.service - OpenSSH per-connection server daemon (10.200.16.10:50428). Nov 6 00:27:08.324891 sshd[5536]: Accepted publickey for core from 10.200.16.10 port 50428 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:08.326445 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:08.330928 systemd-logind[1681]: New session 16 of user core. Nov 6 00:27:08.337611 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:27:08.864647 sshd[5539]: Connection closed by 10.200.16.10 port 50428 Nov 6 00:27:08.866079 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:08.869228 systemd[1]: sshd@13-10.200.8.43:22-10.200.16.10:50428.service: Deactivated successfully. Nov 6 00:27:08.872883 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:27:08.873974 systemd-logind[1681]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:27:08.875838 systemd-logind[1681]: Removed session 16. 
Nov 6 00:27:10.484839 kubelet[3155]: E1106 00:27:10.484801 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:27:11.483838 kubelet[3155]: E1106 00:27:11.483783 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a" Nov 6 00:27:13.485277 kubelet[3155]: E1106 00:27:13.485205 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:27:13.485992 kubelet[3155]: E1106 00:27:13.485586 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:27:13.979742 systemd[1]: Started sshd@14-10.200.8.43:22-10.200.16.10:35752.service - OpenSSH per-connection server daemon (10.200.16.10:35752). Nov 6 00:27:14.606684 sshd[5551]: Accepted publickey for core from 10.200.16.10 port 35752 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:14.607749 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:14.611986 systemd-logind[1681]: New session 17 of user core. Nov 6 00:27:14.614485 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:27:15.130818 sshd[5554]: Connection closed by 10.200.16.10 port 35752 Nov 6 00:27:15.131513 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:15.135971 systemd-logind[1681]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:27:15.137704 systemd[1]: sshd@14-10.200.8.43:22-10.200.16.10:35752.service: Deactivated successfully. 
Nov 6 00:27:15.140521 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:27:15.145010 systemd-logind[1681]: Removed session 17. Nov 6 00:27:15.241539 systemd[1]: Started sshd@15-10.200.8.43:22-10.200.16.10:35760.service - OpenSSH per-connection server daemon (10.200.16.10:35760). Nov 6 00:27:15.873705 sshd[5566]: Accepted publickey for core from 10.200.16.10 port 35760 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:15.874281 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:15.879926 systemd-logind[1681]: New session 18 of user core. Nov 6 00:27:15.885514 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:27:16.438961 sshd[5569]: Connection closed by 10.200.16.10 port 35760 Nov 6 00:27:16.437644 sshd-session[5566]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:16.441387 systemd-logind[1681]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:27:16.443513 systemd[1]: sshd@15-10.200.8.43:22-10.200.16.10:35760.service: Deactivated successfully. Nov 6 00:27:16.445309 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:27:16.446710 systemd-logind[1681]: Removed session 18. Nov 6 00:27:16.551717 systemd[1]: Started sshd@16-10.200.8.43:22-10.200.16.10:35762.service - OpenSSH per-connection server daemon (10.200.16.10:35762). Nov 6 00:27:17.188110 sshd[5581]: Accepted publickey for core from 10.200.16.10 port 35762 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:17.189085 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:17.193051 systemd-logind[1681]: New session 19 of user core. Nov 6 00:27:17.202488 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 6 00:27:17.484316 kubelet[3155]: E1106 00:27:17.484222 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047" Nov 6 00:27:18.197366 sshd[5584]: Connection closed by 10.200.16.10 port 35762 Nov 6 00:27:18.198501 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:18.204837 systemd-logind[1681]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:27:18.205023 systemd[1]: sshd@16-10.200.8.43:22-10.200.16.10:35762.service: Deactivated successfully. Nov 6 00:27:18.207085 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:27:18.211391 systemd-logind[1681]: Removed session 19. Nov 6 00:27:18.310891 systemd[1]: Started sshd@17-10.200.8.43:22-10.200.16.10:35768.service - OpenSSH per-connection server daemon (10.200.16.10:35768). Nov 6 00:27:18.947972 sshd[5601]: Accepted publickey for core from 10.200.16.10 port 35768 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:18.950325 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:18.954483 systemd-logind[1681]: New session 20 of user core. Nov 6 00:27:18.957496 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 6 00:27:19.536639 sshd[5604]: Connection closed by 10.200.16.10 port 35768 Nov 6 00:27:19.538016 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:19.541357 systemd[1]: sshd@17-10.200.8.43:22-10.200.16.10:35768.service: Deactivated successfully. Nov 6 00:27:19.541643 systemd-logind[1681]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:27:19.543284 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:27:19.545099 systemd-logind[1681]: Removed session 20. Nov 6 00:27:19.651060 systemd[1]: Started sshd@18-10.200.8.43:22-10.200.16.10:35774.service - OpenSSH per-connection server daemon (10.200.16.10:35774). Nov 6 00:27:20.288465 sshd[5614]: Accepted publickey for core from 10.200.16.10 port 35774 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:27:20.289589 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:27:20.295451 systemd-logind[1681]: New session 21 of user core. Nov 6 00:27:20.300674 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:27:20.776240 sshd[5617]: Connection closed by 10.200.16.10 port 35774 Nov 6 00:27:20.778286 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Nov 6 00:27:20.782213 systemd-logind[1681]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:27:20.782401 systemd[1]: sshd@18-10.200.8.43:22-10.200.16.10:35774.service: Deactivated successfully. Nov 6 00:27:20.786560 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:27:20.789087 systemd-logind[1681]: Removed session 21. 
Nov 6 00:27:21.485331 kubelet[3155]: E1106 00:27:21.485288 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9" Nov 6 00:27:24.486599 kubelet[3155]: E1106 00:27:24.486519 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0" Nov 6 00:27:24.487674 kubelet[3155]: E1106 00:27:24.487638 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc" Nov 6 00:27:24.490168 kubelet[3155]: E1106 00:27:24.490142 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1" Nov 6 00:27:25.904793 systemd[1]: Started sshd@19-10.200.8.43:22-10.200.16.10:34850.service - OpenSSH per-connection server daemon (10.200.16.10:34850). 
Nov 6 00:27:26.486369 kubelet[3155]: E1106 00:27:26.485870 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a"
Nov 6 00:27:26.531545 sshd[5633]: Accepted publickey for core from 10.200.16.10 port 34850 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q
Nov 6 00:27:26.533734 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:26.541230 systemd-logind[1681]: New session 22 of user core.
Nov 6 00:27:26.547607 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 6 00:27:27.023390 sshd[5636]: Connection closed by 10.200.16.10 port 34850
Nov 6 00:27:27.024697 sshd-session[5633]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:27.029403 systemd-logind[1681]: Session 22 logged out. Waiting for processes to exit.
Nov 6 00:27:27.030257 systemd[1]: sshd@19-10.200.8.43:22-10.200.16.10:34850.service: Deactivated successfully.
Nov 6 00:27:27.032138 systemd[1]: session-22.scope: Deactivated successfully.
Nov 6 00:27:27.037808 systemd-logind[1681]: Removed session 22.
Nov 6 00:27:28.483183 kubelet[3155]: E1106 00:27:28.483041 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047"
Nov 6 00:27:32.136607 systemd[1]: Started sshd@20-10.200.8.43:22-10.200.16.10:39648.service - OpenSSH per-connection server daemon (10.200.16.10:39648).
Nov 6 00:27:32.952189 sshd[5648]: Accepted publickey for core from 10.200.16.10 port 39648 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q
Nov 6 00:27:32.953203 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:32.957651 systemd-logind[1681]: New session 23 of user core.
Nov 6 00:27:32.963643 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 6 00:27:33.445317 sshd[5651]: Connection closed by 10.200.16.10 port 39648
Nov 6 00:27:33.445836 sshd-session[5648]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:33.449097 systemd[1]: sshd@20-10.200.8.43:22-10.200.16.10:39648.service: Deactivated successfully.
Nov 6 00:27:33.450878 systemd[1]: session-23.scope: Deactivated successfully.
Nov 6 00:27:33.451665 systemd-logind[1681]: Session 23 logged out. Waiting for processes to exit.
Nov 6 00:27:33.453012 systemd-logind[1681]: Removed session 23.
Nov 6 00:27:34.703704 containerd[1697]: time="2025-11-06T00:27:34.703633753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5387dbad5d10454374deda1026fc904f58f86e2d808fd01ae616df423401bad7\" id:\"b247ded8bb1e57fc70a698e405de54b02baf06d1a24701b0228877f585c59ed1\" pid:5675 exited_at:{seconds:1762388854 nanos:703293161}"
Nov 6 00:27:36.483970 kubelet[3155]: E1106 00:27:36.483861 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-pmjl7" podUID="b7b98906-e4ec-4320-b242-7b5d2e64e1f1"
Nov 6 00:27:36.484881 kubelet[3155]: E1106 00:27:36.484845 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b4f46bb86-cxgxc" podUID="920cabd1-828e-4d7e-9f68-76150c797ff9"
Nov 6 00:27:37.485176 kubelet[3155]: E1106 00:27:37.485128 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wmdqb" podUID="a9d7d62c-06ef-4717-9bd7-eae5448191dc"
Nov 6 00:27:37.485664 kubelet[3155]: E1106 00:27:37.485239 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5679b55d7c-7hshc" podUID="463dc82a-762a-4a15-83ea-246d11d83d8a"
Nov 6 00:27:38.484038 kubelet[3155]: E1106 00:27:38.483969 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864945c74-qs8mf" podUID="474a2a28-e33d-439d-b39a-dfe9428b38e0"
Nov 6 00:27:38.557084 systemd[1]: Started sshd@21-10.200.8.43:22-10.200.16.10:39664.service - OpenSSH per-connection server daemon (10.200.16.10:39664).
Nov 6 00:27:39.196366 sshd[5687]: Accepted publickey for core from 10.200.16.10 port 39664 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q
Nov 6 00:27:39.198900 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:39.203932 systemd-logind[1681]: New session 24 of user core.
Nov 6 00:27:39.210204 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 6 00:27:39.483276 kubelet[3155]: E1106 00:27:39.482983 3155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ls9gq" podUID="7fa773ed-071b-40cb-8a90-29ff43899047"
Nov 6 00:27:39.691216 sshd[5690]: Connection closed by 10.200.16.10 port 39664
Nov 6 00:27:39.693332 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:39.696180 systemd[1]: sshd@21-10.200.8.43:22-10.200.16.10:39664.service: Deactivated successfully.
Nov 6 00:27:39.698009 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 00:27:39.699537 systemd-logind[1681]: Session 24 logged out. Waiting for processes to exit.
Nov 6 00:27:39.700276 systemd-logind[1681]: Removed session 24.
Nov 6 00:27:44.807576 systemd[1]: Started sshd@22-10.200.8.43:22-10.200.16.10:48198.service - OpenSSH per-connection server daemon (10.200.16.10:48198).
Nov 6 00:27:45.453368 sshd[5702]: Accepted publickey for core from 10.200.16.10 port 48198 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q
Nov 6 00:27:45.453881 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:27:45.458029 systemd-logind[1681]: New session 25 of user core.
Nov 6 00:27:45.463625 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 00:27:45.981215 sshd[5705]: Connection closed by 10.200.16.10 port 48198
Nov 6 00:27:45.981701 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Nov 6 00:27:45.984931 systemd[1]: sshd@22-10.200.8.43:22-10.200.16.10:48198.service: Deactivated successfully.
Nov 6 00:27:45.986672 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 00:27:45.987332 systemd-logind[1681]: Session 25 logged out. Waiting for processes to exit.
Nov 6 00:27:45.988449 systemd-logind[1681]: Removed session 25.