Nov 24 00:16:50.965985 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025 Nov 24 00:16:50.966013 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:16:50.966027 kernel: BIOS-provided physical RAM map: Nov 24 00:16:50.966035 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:16:50.966042 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 24 00:16:50.966049 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Nov 24 00:16:50.966058 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Nov 24 00:16:50.966065 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Nov 24 00:16:50.966071 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Nov 24 00:16:50.966080 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 24 00:16:50.966087 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 24 00:16:50.966094 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 24 00:16:50.966100 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 24 00:16:50.966107 kernel: printk: legacy bootconsole [earlyser0] enabled Nov 24 00:16:50.966116 kernel: NX (Execute Disable) protection: active Nov 24 00:16:50.966126 kernel: APIC: Static calls initialized Nov 24 00:16:50.966134 kernel: efi: EFI v2.7 by Microsoft Nov 24 00:16:50.966142 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018 Nov 24 00:16:50.966149 kernel: random: crng init done Nov 24 00:16:50.966157 kernel: secureboot: Secure boot disabled Nov 24 00:16:50.966165 kernel: SMBIOS 3.1.0 present. 
Nov 24 00:16:50.966172 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Nov 24 00:16:50.966180 kernel: DMI: Memory slots populated: 2/2 Nov 24 00:16:50.966187 kernel: Hypervisor detected: Microsoft Hyper-V Nov 24 00:16:50.966194 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 24 00:16:50.966202 kernel: Hyper-V: Nested features: 0x3e0101 Nov 24 00:16:50.966212 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 24 00:16:50.966219 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 24 00:16:50.966227 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 24 00:16:50.966235 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 24 00:16:50.966243 kernel: tsc: Detected 2299.999 MHz processor Nov 24 00:16:50.966250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 00:16:50.966259 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 00:16:50.966266 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 24 00:16:50.966274 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 24 00:16:50.966282 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 00:16:50.966292 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 24 00:16:50.966300 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 24 00:16:50.966308 kernel: Using GB pages for direct mapping Nov 24 00:16:50.966316 kernel: ACPI: Early table checksum verification disabled Nov 24 00:16:50.966353 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 24 00:16:50.966362 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966371 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966379 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 24 00:16:50.966388 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 24 00:16:50.966396 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966405 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966413 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966421 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 24 00:16:50.966431 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 24 00:16:50.966439 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 24 00:16:50.966447 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 24 00:16:50.966455 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Nov 24 00:16:50.966464 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 24 00:16:50.966472 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 24 00:16:50.966481 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 24 00:16:50.966490 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 24 00:16:50.966498 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Nov 24 00:16:50.966508 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 24 00:16:50.966517 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 24 00:16:50.966525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 24 00:16:50.966534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 24 00:16:50.966542 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 24 00:16:50.966550 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 24 00:16:50.966559 kernel: Zone ranges: Nov 24 00:16:50.966567 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 00:16:50.966576 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 24 00:16:50.966586 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 24 00:16:50.966595 kernel: Device empty Nov 24 00:16:50.966603 kernel: Movable zone start for each node Nov 24 00:16:50.966611 kernel: Early memory node ranges Nov 24 00:16:50.966619 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 24 00:16:50.966627 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 24 00:16:50.966635 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 24 00:16:50.966644 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 24 00:16:50.966652 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 24 00:16:50.966662 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 24 00:16:50.966670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 00:16:50.966678 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 24 00:16:50.966687 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 24 00:16:50.966695 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 24 00:16:50.966702 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 24 00:16:50.966710 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 24 00:16:50.966719 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 24 00:16:50.966727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 00:16:50.966738 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 00:16:50.966746 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 24 00:16:50.966755 kernel: TSC deadline timer available Nov 24 00:16:50.966763 kernel: CPU topo: Max. logical packages: 1 Nov 24 00:16:50.966770 kernel: CPU topo: Max. logical dies: 1 Nov 24 00:16:50.966778 kernel: CPU topo: Max. dies per package: 1 Nov 24 00:16:50.966786 kernel: CPU topo: Max. threads per core: 2 Nov 24 00:16:50.966794 kernel: CPU topo: Num. cores per package: 1 Nov 24 00:16:50.966854 kernel: CPU topo: Num. 
threads per package: 2 Nov 24 00:16:50.966862 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 24 00:16:50.966871 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 24 00:16:50.966878 kernel: Booting paravirtualized kernel on Hyper-V Nov 24 00:16:50.966886 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 00:16:50.966895 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 24 00:16:50.966918 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 24 00:16:50.966927 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 24 00:16:50.966936 kernel: pcpu-alloc: [0] 0 1 Nov 24 00:16:50.966945 kernel: Hyper-V: PV spinlocks enabled Nov 24 00:16:50.966954 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 00:16:50.966962 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:16:50.966974 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 24 00:16:50.966981 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 24 00:16:50.966988 kernel: Fallback order for Node 0: 0 Nov 24 00:16:50.966995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 24 00:16:50.967001 kernel: Policy zone: Normal Nov 24 00:16:50.967009 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 00:16:50.967017 kernel: software IO TLB: area num 2. Nov 24 00:16:50.967027 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 24 00:16:50.967035 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 00:16:50.967043 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 00:16:50.967052 kernel: Dynamic Preempt: voluntary Nov 24 00:16:50.967060 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 00:16:50.967069 kernel: rcu: RCU event tracing is enabled. Nov 24 00:16:50.967084 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 24 00:16:50.967095 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 00:16:50.967104 kernel: Rude variant of Tasks RCU enabled. Nov 24 00:16:50.967112 kernel: Tracing variant of Tasks RCU enabled. Nov 24 00:16:50.967122 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 00:16:50.967132 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 24 00:16:50.967140 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:16:50.967149 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:16:50.967158 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:16:50.967167 kernel: Using NULL legacy PIC Nov 24 00:16:50.967178 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 24 00:16:50.967186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 24 00:16:50.967194 kernel: Console: colour dummy device 80x25 Nov 24 00:16:50.967202 kernel: printk: legacy console [tty1] enabled Nov 24 00:16:50.967211 kernel: printk: legacy console [ttyS0] enabled Nov 24 00:16:50.967220 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 24 00:16:50.967228 kernel: ACPI: Core revision 20240827 Nov 24 00:16:50.967237 kernel: Failed to register legacy timer interrupt Nov 24 00:16:50.967246 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 00:16:50.967256 kernel: x2apic enabled Nov 24 00:16:50.967265 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 00:16:50.967274 kernel: Hyper-V: Host Build 10.0.26100.1421-1-0 Nov 24 00:16:50.967283 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 24 00:16:50.967293 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 24 00:16:50.967302 kernel: Hyper-V: Using IPI hypercalls Nov 24 00:16:50.967311 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 24 00:16:50.967320 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 24 00:16:50.967330 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 24 00:16:50.967341 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 24 00:16:50.967351 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 24 00:16:50.967359 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 24 00:16:50.967368 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 24 00:16:50.967376 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Nov 24 00:16:50.967383 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 24 00:16:50.967392 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 24 00:16:50.967401 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 24 00:16:50.967409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 00:16:50.967419 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 00:16:50.967428 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 00:16:50.967437 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 24 00:16:50.967444 kernel: RETBleed: Vulnerable Nov 24 00:16:50.967453 kernel: Speculative Store Bypass: Vulnerable Nov 24 00:16:50.967461 kernel: active return thunk: its_return_thunk Nov 24 00:16:50.967470 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 24 00:16:50.967479 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 00:16:50.967488 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 00:16:50.967496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 00:16:50.967503 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 24 00:16:50.967514 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 24 00:16:50.967523 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 24 00:16:50.967531 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 24 00:16:50.967539 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 24 00:16:50.967548 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 24 00:16:50.967556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 00:16:50.967564 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 24 00:16:50.967573 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 24 00:16:50.967581 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 24 00:16:50.967589 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 24 00:16:50.967598 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 24 00:16:50.967608 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 24 00:16:50.967617 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 24 00:16:50.967626 kernel: Freeing SMP alternatives memory: 32K Nov 24 00:16:50.967634 kernel: pid_max: default: 32768 minimum: 301 Nov 24 00:16:50.967643 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 00:16:50.967652 kernel: landlock: Up and running. Nov 24 00:16:50.967660 kernel: SELinux: Initializing. Nov 24 00:16:50.967669 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:16:50.967678 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:16:50.967687 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 24 00:16:50.967696 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 24 00:16:50.967705 kernel: signal: max sigframe size: 11952 Nov 24 00:16:50.967716 kernel: rcu: Hierarchical SRCU implementation. Nov 24 00:16:50.967725 kernel: rcu: Max phase no-delay instances is 400. Nov 24 00:16:50.967735 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 00:16:50.967744 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 24 00:16:50.967753 kernel: smp: Bringing up secondary CPUs ... Nov 24 00:16:50.967762 kernel: smpboot: x86: Booting SMP configuration: Nov 24 00:16:50.967771 kernel: .... 
node #0, CPUs: #1 Nov 24 00:16:50.967780 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 00:16:50.967789 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 24 00:16:50.967800 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved) Nov 24 00:16:50.967809 kernel: devtmpfs: initialized Nov 24 00:16:50.967819 kernel: x86/mm: Memory block size: 128MB Nov 24 00:16:50.967827 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 24 00:16:50.967836 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 00:16:50.967846 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 24 00:16:50.967855 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 00:16:50.967864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 00:16:50.967873 kernel: audit: initializing netlink subsys (disabled) Nov 24 00:16:50.967884 kernel: audit: type=2000 audit(1763943407.121:1): state=initialized audit_enabled=0 res=1 Nov 24 00:16:50.967893 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 00:16:50.967901 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 00:16:50.968957 kernel: cpuidle: using governor menu Nov 24 00:16:50.968970 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 00:16:50.968979 kernel: dca service started, version 1.12.1 Nov 24 00:16:50.968989 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 24 00:16:50.968999 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 24 00:16:50.969011 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 24 00:16:50.969019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 00:16:50.969029 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 00:16:50.969038 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 00:16:50.969047 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 00:16:50.969056 kernel: ACPI: Added _OSI(Module Device) Nov 24 00:16:50.969066 kernel: ACPI: Added _OSI(Processor Device) Nov 24 00:16:50.969074 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 00:16:50.969083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 24 00:16:50.969093 kernel: ACPI: Interpreter enabled Nov 24 00:16:50.969102 kernel: ACPI: PM: (supports S0 S5) Nov 24 00:16:50.969111 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 00:16:50.969120 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 00:16:50.969129 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 24 00:16:50.969139 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 24 00:16:50.969147 kernel: iommu: Default domain type: Translated Nov 24 00:16:50.969156 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 00:16:50.969165 kernel: efivars: Registered efivars operations Nov 24 00:16:50.969175 kernel: PCI: Using ACPI for IRQ routing Nov 24 00:16:50.969184 kernel: PCI: System does not support PCI Nov 24 00:16:50.969193 kernel: vgaarb: loaded Nov 24 00:16:50.969202 kernel: clocksource: Switched to clocksource tsc-early Nov 24 00:16:50.969211 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 00:16:50.969220 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 00:16:50.969229 kernel: pnp: PnP ACPI init Nov 24 00:16:50.969238 kernel: pnp: PnP ACPI: found 3 devices Nov 24 00:16:50.969247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 00:16:50.969256 kernel: NET: Registered PF_INET protocol family Nov 24 00:16:50.969267 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 24 00:16:50.969277 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 24 00:16:50.969286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 00:16:50.969295 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 24 00:16:50.969304 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 24 00:16:50.969313 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 24 00:16:50.969322 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 24 00:16:50.969331 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 24 00:16:50.969342 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 00:16:50.969351 kernel: NET: Registered PF_XDP protocol family Nov 24 00:16:50.969360 kernel: PCI: CLS 0 bytes, default 64 Nov 24 00:16:50.969369 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 24 00:16:50.969378 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB) Nov 24 00:16:50.969387 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 24 00:16:50.969395 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 24 00:16:50.969405 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, 
max_idle_ns: 440795318347 ns Nov 24 00:16:50.969414 kernel: clocksource: Switched to clocksource tsc Nov 24 00:16:50.969425 kernel: Initialise system trusted keyrings Nov 24 00:16:50.969434 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 24 00:16:50.969443 kernel: Key type asymmetric registered Nov 24 00:16:50.969452 kernel: Asymmetric key parser 'x509' registered Nov 24 00:16:50.969460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 00:16:50.969469 kernel: io scheduler mq-deadline registered Nov 24 00:16:50.969477 kernel: io scheduler kyber registered Nov 24 00:16:50.969487 kernel: io scheduler bfq registered Nov 24 00:16:50.969496 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 00:16:50.969506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 00:16:50.969516 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:16:50.969524 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 24 00:16:50.969533 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:16:50.969541 kernel: i8042: PNP: No PS/2 controller found. Nov 24 00:16:50.969682 kernel: rtc_cmos 00:02: registered as rtc0 Nov 24 00:16:50.969761 kernel: rtc_cmos 00:02: setting system clock to 2025-11-24T00:16:50 UTC (1763943410) Nov 24 00:16:50.969832 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 24 00:16:50.969845 kernel: intel_pstate: Intel P-state driver initializing Nov 24 00:16:50.969855 kernel: efifb: probing for efifb Nov 24 00:16:50.969864 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 24 00:16:50.969874 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 24 00:16:50.969883 kernel: efifb: scrolling: redraw Nov 24 00:16:50.969892 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 24 00:16:50.969901 kernel: Console: switching to colour frame buffer device 128x48 Nov 24 00:16:50.970943 kernel: fb0: EFI VGA frame buffer device Nov 24 00:16:50.970956 kernel: pstore: Using crash dump compression: deflate Nov 24 00:16:50.970968 kernel: pstore: Registered efi_pstore as persistent store backend Nov 24 00:16:50.970974 kernel: NET: Registered PF_INET6 protocol family Nov 24 00:16:50.970980 kernel: Segment Routing with IPv6 Nov 24 00:16:50.970986 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 00:16:50.970995 kernel: NET: Registered PF_PACKET protocol family Nov 24 00:16:50.971006 kernel: Key type dns_resolver registered Nov 24 00:16:50.971014 kernel: IPI shorthand broadcast: enabled Nov 24 00:16:50.971022 kernel: sched_clock: Marking stable (3078124794, 92552261)->(3480040387, -309363332) Nov 24 00:16:50.971028 kernel: registered taskstats version 1 Nov 24 00:16:50.971035 kernel: Loading compiled-in X.509 certificates Nov 24 00:16:50.971041 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607' Nov 24 00:16:50.971050 kernel: Demotion targets for Node 0: null Nov 24 00:16:50.971059 kernel: Key type .fscrypt registered Nov 24 00:16:50.971068 kernel: Key type fscrypt-provisioning registered Nov 24 00:16:50.971075 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 24 00:16:50.971080 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:16:50.971086 kernel: ima: No architecture policies found Nov 24 00:16:50.971091 kernel: clk: Disabling unused clocks Nov 24 00:16:50.971104 kernel: Warning: unable to open an initial console. Nov 24 00:16:50.971114 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 00:16:50.971122 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:16:50.971127 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:16:50.971133 kernel: Run /init as init process Nov 24 00:16:50.971138 kernel: with arguments: Nov 24 00:16:50.971147 kernel: /init Nov 24 00:16:50.971157 kernel: with environment: Nov 24 00:16:50.971165 kernel: HOME=/ Nov 24 00:16:50.971174 kernel: TERM=linux Nov 24 00:16:50.971181 systemd[1]: Successfully made /usr/ read-only. Nov 24 00:16:50.971190 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:16:50.971200 systemd[1]: Detected virtualization microsoft. Nov 24 00:16:50.971211 systemd[1]: Detected architecture x86-64. Nov 24 00:16:50.971220 systemd[1]: Running in initrd. Nov 24 00:16:50.971229 systemd[1]: No hostname configured, using default hostname. Nov 24 00:16:50.971237 systemd[1]: Hostname set to . Nov 24 00:16:50.971243 systemd[1]: Initializing machine ID from random generator. Nov 24 00:16:50.971249 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:16:50.971261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:16:50.971271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:16:50.971283 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:16:50.971293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:16:50.971300 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:16:50.971308 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:16:50.971315 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:16:50.971324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:16:50.971334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:16:50.971344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:16:50.971353 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:16:50.971361 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:16:50.971368 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:16:50.971375 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:16:50.971382 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:16:50.971393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 24 00:16:50.971402 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:16:50.971409 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:16:50.971415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:16:50.971421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:16:50.971428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:16:50.971441 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:16:50.971451 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:16:50.971459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:16:50.971465 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 00:16:50.971471 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:16:50.971478 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:16:50.971487 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:16:50.971498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:16:50.971517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:16:50.971555 systemd-journald[187]: Collecting audit messages is disabled. Nov 24 00:16:50.971581 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:16:50.971593 systemd-journald[187]: Journal started Nov 24 00:16:50.971619 systemd-journald[187]: Runtime Journal (/run/log/journal/3548a6fd4ea74b488ddb57886710fd8f) is 8M, max 158.6M, 150.6M free. Nov 24 00:16:50.977970 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:16:50.980774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:16:50.986794 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:16:50.990590 systemd-modules-load[189]: Inserted module 'overlay' Nov 24 00:16:50.995142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:16:51.005074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:16:51.017223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:16:51.027512 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:16:51.028013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:16:51.031655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:16:51.047144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 00:16:51.047173 kernel: Bridge firewalling registered Nov 24 00:16:51.040330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:16:51.048009 systemd-modules-load[189]: Inserted module 'br_netfilter' Nov 24 00:16:51.051340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:16:51.055925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 24 00:16:51.062077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:16:51.075963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:16:51.080053 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:16:51.084354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:16:51.089984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:16:51.096026 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:16:51.120713 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:16:51.123493 systemd-resolved[225]: Positive Trust Anchors: Nov 24 00:16:51.123501 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:16:51.123539 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:16:51.126150 systemd-resolved[225]: Defaulting to hostname 'linux'. Nov 24 00:16:51.127036 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:16:51.134060 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:16:51.210927 kernel: SCSI subsystem initialized Nov 24 00:16:51.218923 kernel: Loading iSCSI transport class v2.0-870. Nov 24 00:16:51.227949 kernel: iscsi: registered transport (tcp) Nov 24 00:16:51.246109 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:16:51.246154 kernel: QLogic iSCSI HBA Driver Nov 24 00:16:51.259627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:16:51.278451 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:16:51.284374 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:16:51.325504 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:16:51.329419 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 24 00:16:51.379926 kernel: raid6: avx512x4 gen() 45123 MB/s Nov 24 00:16:51.396922 kernel: raid6: avx512x2 gen() 43731 MB/s Nov 24 00:16:51.413920 kernel: raid6: avx512x1 gen() 25190 MB/s Nov 24 00:16:51.432919 kernel: raid6: avx2x4 gen() 34571 MB/s Nov 24 00:16:51.449917 kernel: raid6: avx2x2 gen() 36430 MB/s Nov 24 00:16:51.468057 kernel: raid6: avx2x1 gen() 29598 MB/s Nov 24 00:16:51.468072 kernel: raid6: using algorithm avx512x4 gen() 45123 MB/s Nov 24 00:16:51.487419 kernel: raid6: .... xor() 7779 MB/s, rmw enabled Nov 24 00:16:51.487444 kernel: raid6: using avx512x2 recovery algorithm Nov 24 00:16:51.505923 kernel: xor: automatically using best checksumming function avx Nov 24 00:16:51.628926 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:16:51.634433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:16:51.638882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:16:51.663212 systemd-udevd[437]: Using default interface naming scheme 'v255'. Nov 24 00:16:51.668109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:16:51.674820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:16:51.696294 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Nov 24 00:16:51.715405 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:16:51.718007 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:16:51.749251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:16:51.756859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:16:51.803924 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:16:51.826124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:16:51.829016 kernel: AES CTR mode by8 optimization enabled Nov 24 00:16:51.828696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:16:51.832500 kernel: hv_vmbus: Vmbus version:5.3 Nov 24 00:16:51.834923 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:16:51.843704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:16:51.850583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:16:51.850667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:16:51.863237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:16:51.877999 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 24 00:16:51.878020 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 24 00:16:51.884032 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 24 00:16:51.891949 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 24 00:16:51.895559 kernel: hv_vmbus: registering driver hv_netvsc Nov 24 00:16:51.897942 kernel: hv_vmbus: registering driver hid_hyperv Nov 24 00:16:51.901928 kernel: hv_vmbus: registering driver hv_pci Nov 24 00:16:51.901960 kernel: PTP clock support registered Nov 24 00:16:51.904658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 24 00:16:51.915117 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 24 00:16:51.919539 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 24 00:16:51.919581 kernel: hv_utils: Registering HyperV Utility Driver Nov 24 00:16:51.920566 kernel: hv_vmbus: registering driver hv_utils Nov 24 00:16:51.926152 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 24 00:16:51.926837 kernel: hv_utils: Shutdown IC version 3.2 Nov 24 00:16:51.926971 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad5c224 (unnamed net_device) (uninitialized): VF slot 1 added Nov 24 00:16:51.929573 kernel: hv_utils: Heartbeat IC version 3.0 Nov 24 00:16:51.929785 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 24 00:16:51.930061 kernel: hv_utils: TimeSync IC version 4.0 Nov 24 00:16:52.030102 systemd-resolved[225]: Clock change detected. Flushing caches. Nov 24 00:16:52.038938 kernel: hv_vmbus: registering driver hv_storvsc Nov 24 00:16:52.038998 kernel: scsi host0: storvsc_host_t Nov 24 00:16:52.039787 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 24 00:16:52.067737 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 24 00:16:52.068012 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 24 00:16:52.071183 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:16:52.077265 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 24 00:16:52.082259 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 24 00:16:52.084316 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 24 00:16:52.084458 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 00:16:52.087178 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 24 00:16:52.099288 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:16:52.099477 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 24 00:16:52.114185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:16:52.118331 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 24 00:16:52.121142 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 24 00:16:52.138180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:16:52.283190 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 24 00:16:52.291182 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:16:52.532191 kernel: nvme nvme0: using unchecked data buffer Nov 24 00:16:52.688294 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 24 00:16:52.707478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 24 00:16:52.779997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 24 00:16:52.784296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 24 00:16:52.786647 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Nov 24 00:16:52.800171 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 24 00:16:52.802481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:16:52.808214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:16:52.813207 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:16:52.819721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:16:52.830517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:16:52.843516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:16:52.845242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:16:52.857185 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:16:53.052213 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 24 00:16:53.057082 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 24 00:16:53.057268 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 24 00:16:53.058650 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:16:53.064271 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 24 00:16:53.068258 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 24 00:16:53.073377 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 24 00:16:53.073400 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 24 00:16:53.094261 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:16:53.094436 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 24 00:16:53.097513 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 24 00:16:53.102243 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 24 00:16:53.111178 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 24 00:16:53.114382 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad5c224 eth0: VF registering: eth1 Nov 24 00:16:53.114536 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 24 00:16:53.118209 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 24 00:16:53.862746 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:16:53.862810 disk-uuid[653]: The operation has completed successfully. Nov 24 00:16:53.911500 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:16:53.911587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:16:53.949791 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:16:53.967320 sh[694]: Success Nov 24 00:16:53.997373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:16:53.997437 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:16:53.999027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:16:54.009202 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 00:16:54.238803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:16:54.243631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:16:54.257613 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 24 00:16:54.271187 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (707) Nov 24 00:16:54.271223 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 00:16:54.273870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:16:54.513546 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:16:54.513639 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:16:54.514675 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:16:54.706907 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:16:54.709931 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:16:54.715554 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:16:54.718768 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:16:54.723518 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 00:16:54.746198 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (736) Nov 24 00:16:54.749890 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:16:54.749927 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:16:54.770889 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:16:54.770929 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:16:54.772426 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:16:54.779212 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:16:54.780391 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:16:54.786148 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:16:54.809900 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:16:54.812681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:16:54.839047 systemd-networkd[876]: lo: Link UP Nov 24 00:16:54.839056 systemd-networkd[876]: lo: Gained carrier Nov 24 00:16:54.840355 systemd-networkd[876]: Enumeration completed Nov 24 00:16:54.845094 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 24 00:16:54.840746 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:16:54.854031 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:16:54.854251 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad5c224 eth0: Data path switched to VF: enP30832s1 Nov 24 00:16:54.840750 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:16:54.842652 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:16:54.846297 systemd[1]: Reached target network.target - Network. 
Nov 24 00:16:54.854793 systemd-networkd[876]: enP30832s1: Link UP Nov 24 00:16:54.854869 systemd-networkd[876]: eth0: Link UP Nov 24 00:16:54.854956 systemd-networkd[876]: eth0: Gained carrier Nov 24 00:16:54.854967 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:16:54.859311 systemd-networkd[876]: enP30832s1: Gained carrier Nov 24 00:16:54.886196 systemd-networkd[876]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:16:55.795621 ignition[837]: Ignition 2.22.0 Nov 24 00:16:55.795633 ignition[837]: Stage: fetch-offline Nov 24 00:16:55.797351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:16:55.795734 ignition[837]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:55.800728 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 24 00:16:55.795740 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:55.795832 ignition[837]: parsed url from cmdline: "" Nov 24 00:16:55.795835 ignition[837]: no config URL provided Nov 24 00:16:55.795840 ignition[837]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:16:55.795845 ignition[837]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:16:55.795850 ignition[837]: failed to fetch config: resource requires networking Nov 24 00:16:55.796006 ignition[837]: Ignition finished successfully Nov 24 00:16:55.832646 ignition[886]: Ignition 2.22.0 Nov 24 00:16:55.832664 ignition[886]: Stage: fetch Nov 24 00:16:55.832888 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:55.833548 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:55.833656 ignition[886]: parsed url from cmdline: "" Nov 24 00:16:55.833658 ignition[886]: no config URL provided Nov 24 00:16:55.833663 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:16:55.833667 ignition[886]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:16:55.833683 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 24 00:16:55.956278 ignition[886]: GET result: OK Nov 24 00:16:55.956373 ignition[886]: config has been read from IMDS userdata Nov 24 00:16:55.956401 ignition[886]: parsing config with SHA512: 7a31d3a16c37f795531621826b9569b095e12928c90fbe1817765b9aeab9df5ad82741b8014539bfbc81a12e390cdd9a1c2a93d5442febcb9283276f79df075e Nov 24 00:16:55.960402 unknown[886]: fetched base config from "system" Nov 24 00:16:55.960417 unknown[886]: fetched base config from "system" Nov 24 00:16:55.960423 unknown[886]: fetched user config from "azure" Nov 24 00:16:55.963147 ignition[886]: fetch: fetch complete Nov 24 00:16:55.966042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:16:55.963153 ignition[886]: fetch: fetch passed Nov 24 00:16:55.971049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:16:55.963235 ignition[886]: Ignition finished successfully Nov 24 00:16:56.014417 ignition[892]: Ignition 2.22.0 Nov 24 00:16:56.014426 ignition[892]: Stage: kargs Nov 24 00:16:56.014670 ignition[892]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:56.018061 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 24 00:16:56.014678 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:56.023131 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 00:16:56.015640 ignition[892]: kargs: kargs passed Nov 24 00:16:56.015675 ignition[892]: Ignition finished successfully Nov 24 00:16:56.062479 ignition[899]: Ignition 2.22.0 Nov 24 00:16:56.062486 ignition[899]: Stage: disks Nov 24 00:16:56.062672 ignition[899]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:56.065861 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:16:56.062678 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:56.068404 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:16:56.063752 ignition[899]: disks: disks passed Nov 24 00:16:56.071217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:16:56.063792 ignition[899]: Ignition finished successfully Nov 24 00:16:56.076746 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:16:56.082738 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:16:56.085217 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:16:56.089423 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:16:56.165831 systemd-fsck[907]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 24 00:16:56.171608 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:16:56.176117 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:16:56.419188 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 00:16:56.420292 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:16:56.422858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:16:56.440650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:16:56.455244 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:16:56.460610 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 24 00:16:56.470652 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (916) Nov 24 00:16:56.470676 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:16:56.470688 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:16:56.468458 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:16:56.468546 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:16:56.485357 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:16:56.485387 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:16:56.485398 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:16:56.473834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:16:56.484539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:16:56.489787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 24 00:16:56.800291 systemd-networkd[876]: eth0: Gained IPv6LL Nov 24 00:16:56.995317 coreos-metadata[918]: Nov 24 00:16:56.995 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 24 00:16:56.998313 coreos-metadata[918]: Nov 24 00:16:56.998 INFO Fetch successful Nov 24 00:16:56.999831 coreos-metadata[918]: Nov 24 00:16:56.998 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 24 00:16:57.007376 coreos-metadata[918]: Nov 24 00:16:57.007 INFO Fetch successful Nov 24 00:16:57.024350 coreos-metadata[918]: Nov 24 00:16:57.024 INFO wrote hostname ci-4459.2.1-a-8bf8e53aa8 to /sysroot/etc/hostname Nov 24 00:16:57.028333 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 00:16:57.269504 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:16:57.315362 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:16:57.333691 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:16:57.338384 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:16:58.216945 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:16:58.221288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:16:58.225277 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:16:58.241467 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:16:58.244374 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:16:58.261971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 00:16:58.272376 ignition[1036]: INFO : Ignition 2.22.0 Nov 24 00:16:58.272376 ignition[1036]: INFO : Stage: mount Nov 24 00:16:58.275315 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:58.275315 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:58.275315 ignition[1036]: INFO : mount: mount passed Nov 24 00:16:58.275315 ignition[1036]: INFO : Ignition finished successfully Nov 24 00:16:58.278620 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:16:58.284674 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:16:58.296633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:16:58.317186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1049) Nov 24 00:16:58.319261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:16:58.319358 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:16:58.323488 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:16:58.323532 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:16:58.324580 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:16:58.326353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
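
The hostname agent above probes the wireserver, reads the instance name from IMDS, and writes it into the sysroot's /etc/hostname. A rough sketch of the fetch-and-write step; the "Metadata: true" header and the stand-in output path are assumptions, and the wireserver version check is omitted.

# Sketch: mirror the flatcar-metadata-hostname step above -- read the instance
# name from the IMDS endpoint shown in the log and write it out as a hostname
# file. The output path is a stand-in for /sysroot/etc/hostname.
import urllib.request

IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
             "?api-version=2017-08-01&format=text")
HOSTNAME_PATH = "/tmp/etc-hostname-example"   # stand-in path for illustration

req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=10) as resp:
    name = resp.read().decode().strip()

with open(HOSTNAME_PATH, "w") as f:
    f.write(name + "\n")
print("wrote hostname", name, "to", HOSTNAME_PATH)
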
Nov 24 00:16:58.353527 ignition[1066]: INFO : Ignition 2.22.0 Nov 24 00:16:58.353527 ignition[1066]: INFO : Stage: files Nov 24 00:16:58.358097 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:58.358097 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:58.358097 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:16:58.367902 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:16:58.367902 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:16:58.396497 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:16:58.400239 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:16:58.400239 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:16:58.398852 unknown[1066]: wrote ssh authorized keys file for user: core Nov 24 00:16:58.413864 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:16:58.417402 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 24 00:16:58.452120 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:16:58.529428 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:16:58.529428 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:16:58.538225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:16:58.561222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 24 00:16:58.645211 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:16:58.874741 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:16:58.874741 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:16:58.901887 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:16:58.913177 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:16:58.913177 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:16:58.920247 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:16:58.920247 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:16:58.920247 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:16:58.920247 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:16:58.920247 ignition[1066]: INFO : files: files passed Nov 24 00:16:58.920247 ignition[1066]: INFO : Ignition finished successfully Nov 24 00:16:58.919533 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:16:58.936063 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:16:58.940459 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:16:58.955396 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:16:58.955939 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:16:58.973457 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:16:58.973457 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:16:58.981652 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:16:58.977403 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:16:58.981955 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:16:58.991817 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:16:59.029498 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:16:59.029602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
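
The files stage above downloads artifacts such as the helm tarball with numbered attempts before writing them under /sysroot. A simplified retry sketch of that pattern; the attempt count, backoff, and destination path are placeholders, since Ignition's real retry policy is not visible in this log.

# Sketch: download a files-stage artifact with numbered attempts, loosely
# mirroring the "GET ... attempt #N" lines above. Retry count and backoff are
# assumptions; the destination is a stand-in for /sysroot/opt/...
import time
import urllib.request

URL = "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"
DEST = "/tmp/helm-v3.17.0-linux-amd64.tar.gz"

def fetch(url: str, dest: str, attempts: int = 3, backoff: float = 5.0) -> None:
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                out.write(resp.read())
            print("GET result: OK")
            return
        except OSError as err:
            print("GET failed:", err)
            time.sleep(backoff)
    raise RuntimeError(f"giving up on {url}")

if __name__ == "__main__":
    fetch(URL, DEST)
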
Nov 24 00:16:59.036503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:16:59.038021 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:16:59.042194 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:16:59.043095 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:16:59.075619 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:16:59.079914 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:16:59.104802 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:16:59.105371 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:16:59.105728 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:16:59.106369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:16:59.106474 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:16:59.107027 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:16:59.107754 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:16:59.108063 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:16:59.108419 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:16:59.108769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:16:59.109137 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:16:59.111514 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:16:59.111842 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:16:59.112104 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:16:59.112778 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:16:59.113028 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:16:59.149281 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:16:59.149432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:16:59.154578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:16:59.156757 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:16:59.162478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:16:59.165001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:16:59.169425 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:16:59.170660 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:16:59.173625 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:16:59.175425 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:16:59.180707 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:16:59.182103 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:16:59.185330 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 24 00:16:59.185444 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 24 00:16:59.192175 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:16:59.199253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:16:59.199580 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:16:59.209371 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:16:59.215270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:16:59.216091 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:16:59.231234 ignition[1120]: INFO : Ignition 2.22.0 Nov 24 00:16:59.231234 ignition[1120]: INFO : Stage: umount Nov 24 00:16:59.231234 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:16:59.231234 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:16:59.231234 ignition[1120]: INFO : umount: umount passed Nov 24 00:16:59.231234 ignition[1120]: INFO : Ignition finished successfully Nov 24 00:16:59.228649 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:16:59.228754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:16:59.244325 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:16:59.244422 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:16:59.251336 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:16:59.251531 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:16:59.252504 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:16:59.252550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:16:59.252605 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:16:59.252635 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:16:59.260242 systemd[1]: Stopped target network.target - Network. Nov 24 00:16:59.264216 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:16:59.264262 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:16:59.268675 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:16:59.273752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:16:59.279140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:16:59.284224 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:16:59.288215 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:16:59.290571 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:16:59.290611 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:16:59.292693 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:16:59.292724 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:16:59.295880 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:16:59.295924 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:16:59.300239 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:16:59.300277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:16:59.304377 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:16:59.308267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Nov 24 00:16:59.313337 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:16:59.315231 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:16:59.315322 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:16:59.318489 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:16:59.318586 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:16:59.325083 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:16:59.325338 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:16:59.325428 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:16:59.327741 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:16:59.329874 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:16:59.331190 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:16:59.331226 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:16:59.332141 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:16:59.339289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:16:59.339344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:16:59.341763 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:16:59.341806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:16:59.346769 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:16:59.346813 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:16:59.350408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:16:59.350454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:16:59.351025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:16:59.372641 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:16:59.372707 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:16:59.376487 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:16:59.376622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:16:59.379813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:16:59.379857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:16:59.398185 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad5c224 eth0: Data path switched from VF: enP30832s1 Nov 24 00:16:59.400214 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:16:59.409214 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:16:59.409253 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:16:59.412777 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:16:59.412827 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:16:59.413097 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:16:59.413129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 24 00:16:59.413418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:16:59.413446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:16:59.421284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:16:59.423434 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:16:59.423785 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:16:59.427624 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:16:59.427670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:16:59.430943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:16:59.430980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:16:59.451568 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:16:59.451624 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:16:59.451657 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:16:59.451963 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:16:59.452043 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:16:59.457699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:16:59.457775 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:16:59.638847 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:16:59.638958 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:16:59.642007 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:16:59.642098 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:16:59.642147 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:16:59.643286 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:16:59.658276 systemd[1]: Switching root. Nov 24 00:16:59.730747 systemd-journald[187]: Journal stopped Nov 24 00:17:03.406562 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Nov 24 00:17:03.406592 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:17:03.406607 kernel: SELinux: policy capability open_perms=1 Nov 24 00:17:03.406617 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:17:03.406626 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:17:03.406636 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:17:03.406646 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:17:03.406655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:17:03.406665 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:17:03.406674 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:17:03.406683 kernel: audit: type=1403 audit(1763943420.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:17:03.406694 systemd[1]: Successfully loaded SELinux policy in 143.715ms. Nov 24 00:17:03.406706 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.186ms. 
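
After switch-root, the kernel lines above report the SELinux policy capabilities as the policy loads. A read-only sketch that reports the same information from the selinuxfs mount, assumed at its standard location /sys/fs/selinux; it degrades gracefully on systems without SELinux.

# Sketch: report the SELinux policy capabilities printed by the kernel above,
# by reading the selinuxfs mount (assumed at /sys/fs/selinux). Read-only.
import os

SELINUXFS = "/sys/fs/selinux"
caps_dir = os.path.join(SELINUXFS, "policy_capabilities")

if os.path.isdir(caps_dir):
    with open(os.path.join(SELINUXFS, "enforce")) as f:
        print("enforce:", f.read().strip())
    for cap in sorted(os.listdir(caps_dir)):
        with open(os.path.join(caps_dir, cap)) as f:
            print(f"policy capability {cap}={f.read().strip()}")
else:
    print("selinuxfs not mounted; SELinux unavailable on this system")
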
Nov 24 00:17:03.406719 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:17:03.406731 systemd[1]: Detected virtualization microsoft. Nov 24 00:17:03.406741 systemd[1]: Detected architecture x86-64. Nov 24 00:17:03.406750 systemd[1]: Detected first boot. Nov 24 00:17:03.406762 systemd[1]: Hostname set to . Nov 24 00:17:03.406772 systemd[1]: Initializing machine ID from random generator. Nov 24 00:17:03.406783 zram_generator::config[1162]: No configuration found. Nov 24 00:17:03.406797 kernel: Guest personality initialized and is inactive Nov 24 00:17:03.406805 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Nov 24 00:17:03.406814 kernel: Initialized host personality Nov 24 00:17:03.406823 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:17:03.406832 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:17:03.406843 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:17:03.406854 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:17:03.406864 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:17:03.406876 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:17:03.406887 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:17:03.406897 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:17:03.406906 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:17:03.406916 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:17:03.406927 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:17:03.406937 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:17:03.406950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:17:03.406960 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:17:03.406970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:17:03.406980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:17:03.408214 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:17:03.408233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:17:03.408244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:17:03.408255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:17:03.408267 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:17:03.408277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:17:03.408288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:17:03.408299 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
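
The first-boot lines above include "Initializing machine ID from random generator". As a toy illustration only: a machine ID is a 128-bit value written as 32 lowercase hex characters, which the sketch below imitates; systemd's real systemd-machine-id-setup additionally handles first-boot commit semantics and ID formatting details not reproduced here.

# Toy sketch of "Initializing machine ID from random generator": a machine ID
# is 128 random bits rendered as 32 lowercase hex characters. systemd does more
# than this (commit semantics, formatting details); this only shows the shape.
import secrets

machine_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex characters
assert len(machine_id) == 32
print(machine_id)
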
Nov 24 00:17:03.408311 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:17:03.408321 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:17:03.408331 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:17:03.408345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:17:03.408354 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:17:03.408364 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:17:03.408374 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:17:03.408384 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:17:03.408394 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:17:03.408407 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:17:03.408417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:17:03.408427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:17:03.408437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:17:03.408447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:17:03.408457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:17:03.408468 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:17:03.408481 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:17:03.408492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:03.408504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:17:03.408514 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:17:03.408525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:17:03.408537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:17:03.408547 systemd[1]: Reached target machines.target - Containers. Nov 24 00:17:03.408558 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:17:03.408569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:17:03.408582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:17:03.408592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:17:03.408603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:17:03.408613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:17:03.408623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:17:03.408634 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:17:03.408644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:17:03.408654 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 24 00:17:03.408666 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:17:03.408676 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:17:03.408687 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:17:03.408698 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:17:03.408709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:17:03.408719 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:17:03.408730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:17:03.408740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:17:03.408753 kernel: fuse: init (API version 7.41) Nov 24 00:17:03.408763 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:17:03.408774 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:17:03.408784 kernel: loop: module loaded Nov 24 00:17:03.408795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:17:03.408806 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:17:03.408817 systemd[1]: Stopped verity-setup.service. Nov 24 00:17:03.408828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:03.408862 systemd-journald[1252]: Collecting audit messages is disabled. Nov 24 00:17:03.408890 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:17:03.408902 systemd-journald[1252]: Journal started Nov 24 00:17:03.408929 systemd-journald[1252]: Runtime Journal (/run/log/journal/eb65f938249b4fbeb5317b8429474589) is 8M, max 158.6M, 150.6M free. Nov 24 00:17:03.415225 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:17:02.991583 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:17:03.002782 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:17:03.003209 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:17:03.420144 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:17:03.420782 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:17:03.423361 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:17:03.426333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:17:03.427838 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:17:03.429181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:17:03.432468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:17:03.434238 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:17:03.434476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:17:03.436641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:17:03.436887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 24 00:17:03.439965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:17:03.440227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:17:03.442642 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:17:03.442888 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:17:03.448514 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:17:03.448759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:17:03.451825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:17:03.455141 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:17:03.458561 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:17:03.461905 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:17:03.473184 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:17:03.478749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:17:03.483314 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:17:03.486121 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:17:03.486151 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:17:03.489656 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:17:03.496289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:17:03.501322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:17:03.504273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:17:03.517978 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:17:03.521468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:17:03.523485 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:17:03.527389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:17:03.528460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:17:03.534607 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:17:03.538623 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:17:03.543213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:17:03.545446 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:17:03.548140 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:17:03.562591 kernel: ACPI: bus type drm_connector registered Nov 24 00:17:03.563608 systemd-journald[1252]: Time spent on flushing to /var/log/journal/eb65f938249b4fbeb5317b8429474589 is 13.319ms for 989 entries. Nov 24 00:17:03.563608 systemd-journald[1252]: System Journal (/var/log/journal/eb65f938249b4fbeb5317b8429474589) is 8M, max 2.6G, 2.6G free. 
Nov 24 00:17:03.610052 systemd-journald[1252]: Received client request to flush runtime journal. Nov 24 00:17:03.563764 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:17:03.563931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:17:03.574424 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:17:03.577363 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:17:03.581682 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:17:03.611994 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:17:03.618178 kernel: loop0: detected capacity change from 0 to 27936 Nov 24 00:17:03.637086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:17:03.647968 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:17:03.749889 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:17:03.755218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:17:03.805817 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Nov 24 00:17:03.805833 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Nov 24 00:17:03.808508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:17:03.930207 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:17:03.980192 kernel: loop1: detected capacity change from 0 to 224512 Nov 24 00:17:04.004645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:17:04.039184 kernel: loop2: detected capacity change from 0 to 128560 Nov 24 00:17:04.063065 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:17:04.066064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:17:04.095862 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Nov 24 00:17:04.221686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:17:04.229367 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:17:04.286159 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:17:04.296598 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:17:04.367189 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:17:04.384194 kernel: loop3: detected capacity change from 0 to 110984 Nov 24 00:17:04.401205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#27 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:17:04.413209 kernel: hv_vmbus: registering driver hv_balloon Nov 24 00:17:04.417185 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 24 00:17:04.423191 kernel: hv_vmbus: registering driver hyperv_fb Nov 24 00:17:04.430551 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 24 00:17:04.452188 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 24 00:17:04.456185 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 24 00:17:04.458425 kernel: Console: switching to colour dummy device 80x25 Nov 24 00:17:04.465517 kernel: Console: switching to colour frame buffer device 128x48 Nov 24 00:17:04.550597 systemd-networkd[1340]: lo: Link UP Nov 24 00:17:04.550893 systemd-networkd[1340]: lo: Gained carrier Nov 24 00:17:04.553310 systemd-networkd[1340]: Enumeration completed Nov 24 00:17:04.553464 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:17:04.556750 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:17:04.558188 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:17:04.559280 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:17:04.564927 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 24 00:17:04.565312 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:17:04.576191 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:17:04.580221 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad5c224 eth0: Data path switched to VF: enP30832s1 Nov 24 00:17:04.583359 systemd-networkd[1340]: enP30832s1: Link UP Nov 24 00:17:04.583527 systemd-networkd[1340]: eth0: Link UP Nov 24 00:17:04.583573 systemd-networkd[1340]: eth0: Gained carrier Nov 24 00:17:04.583625 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:17:04.588391 systemd-networkd[1340]: enP30832s1: Gained carrier Nov 24 00:17:04.599215 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:17:04.600524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:17:04.611807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:17:04.611984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:17:04.616383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:17:04.631681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:17:04.631844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:17:04.640316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:17:04.671435 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:17:04.729000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 24 00:17:04.740283 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
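
The recurring "based on potentially unpredictable interface name" warnings above come from networkd matching the catch-all zz-default.network by interface name. One possible remedy, sketched with placeholder values, is a dedicated .network unit that matches the NIC by MAC address instead; [Match] MACAddress= and [Network] DHCP= are standard systemd.network keys, while the MAC and output path below are purely illustrative.

# Sketch: write a systemd .network unit that matches the NIC by MAC address so
# networkd no longer matches the catch-all zz-default.network by a
# "potentially unpredictable interface name" (the warning seen above).
# The MAC and output path are placeholders, not values taken from this host.
from pathlib import Path

MAC = "00:00:00:00:00:00"                      # placeholder MAC address
OUT = Path("/tmp/10-azure-primary.network")    # stand-in for /etc/systemd/network/

OUT.write_text(
    "[Match]\n"
    f"MACAddress={MAC}\n"
    "\n"
    "[Network]\n"
    "DHCP=yes\n"
)
print("wrote", OUT)

In a real deployment the file would be dropped into /etc/systemd/network/ followed by a networkd reload; here it is written to /tmp only to show the unit's shape.
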
Nov 24 00:17:04.745236 kernel: loop4: detected capacity change from 0 to 27936 Nov 24 00:17:04.758184 kernel: loop5: detected capacity change from 0 to 224512 Nov 24 00:17:04.772176 kernel: loop6: detected capacity change from 0 to 128560 Nov 24 00:17:04.780227 kernel: loop7: detected capacity change from 0 to 110984 Nov 24 00:17:04.790383 (sd-merge)[1420]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 24 00:17:04.790742 (sd-merge)[1420]: Merged extensions into '/usr'. Nov 24 00:17:04.793774 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:17:04.793786 systemd[1]: Reloading... Nov 24 00:17:04.822190 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 24 00:17:04.870211 zram_generator::config[1454]: No configuration found. Nov 24 00:17:05.072213 systemd[1]: Reloading finished in 278 ms. Nov 24 00:17:05.091186 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:17:05.093334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:17:05.096494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:17:05.108979 systemd[1]: Starting ensure-sysext.service... Nov 24 00:17:05.112253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:17:05.128421 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:17:05.128439 systemd[1]: Reloading... Nov 24 00:17:05.145733 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:17:05.145942 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:17:05.146157 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:17:05.146503 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:17:05.147306 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:17:05.147624 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Nov 24 00:17:05.147721 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Nov 24 00:17:05.181665 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:17:05.182034 systemd-tmpfiles[1515]: Skipping /boot Nov 24 00:17:05.191426 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:17:05.191734 systemd-tmpfiles[1515]: Skipping /boot Nov 24 00:17:05.197186 zram_generator::config[1549]: No configuration found. Nov 24 00:17:05.377431 systemd[1]: Reloading finished in 248 ms. Nov 24 00:17:05.405587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:17:05.417203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:05.420289 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:17:05.430055 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
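
The (sd-merge) lines above show systemd-sysext merging the staged extension images, including the kubernetes image symlinked under /etc/extensions by the files stage earlier. A read-only sketch that enumerates the standard sysext search directories without merging anything.

# Sketch: list the sysext images/symlinks that systemd-sysext would consider,
# mirroring the "Using extensions ... Merged extensions into '/usr'" lines
# above. /etc/extensions, /run/extensions and /var/lib/extensions are the
# standard search directories; nothing is merged here, only listed.
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    if not os.path.isdir(d):
        continue
    for entry in sorted(os.listdir(d)):
        path = os.path.join(d, entry)
        target = os.path.realpath(path) if os.path.islink(path) else path
        print(f"{path} -> {target}")
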
Nov 24 00:17:05.432551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:17:05.434655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:17:05.440387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:17:05.445979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:17:05.448310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:17:05.448438 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:17:05.451132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:17:05.457384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:17:05.464809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:17:05.467505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:05.469583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:17:05.469764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:17:05.473232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:17:05.473393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:17:05.478708 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:17:05.479329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:17:05.486730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:05.486905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:17:05.489961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:17:05.493950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:17:05.497533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:17:05.499298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:17:05.499421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:17:05.499515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:05.509721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:17:05.509868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:17:05.514451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 24 00:17:05.514774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:17:05.518394 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:17:05.520856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:17:05.520984 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:17:05.521102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:17:05.521304 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:17:05.523919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:17:05.524838 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:17:05.529675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:17:05.530640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:17:05.540362 systemd[1]: Finished ensure-sysext.service. Nov 24 00:17:05.542526 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:17:05.542693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:17:05.547617 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:17:05.547768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:17:05.555109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:17:05.560454 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:17:05.604231 systemd-resolved[1613]: Positive Trust Anchors: Nov 24 00:17:05.604463 systemd-resolved[1613]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:17:05.604543 systemd-resolved[1613]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:17:05.609083 systemd-resolved[1613]: Using system hostname 'ci-4459.2.1-a-8bf8e53aa8'. Nov 24 00:17:05.609703 augenrules[1648]: No rules Nov 24 00:17:05.610828 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:17:05.611044 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:17:05.612824 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:17:05.614629 systemd[1]: Reached target network.target - Network. Nov 24 00:17:05.616042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 24 00:17:06.019683 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:17:06.022289 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:17:06.336299 systemd-networkd[1340]: eth0: Gained IPv6LL Nov 24 00:17:06.338313 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:17:06.341411 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:17:08.432322 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:17:08.445367 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:17:08.448502 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:17:08.482247 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:17:08.484178 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:17:08.485829 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:17:08.489286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:17:08.492235 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:17:08.494135 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:17:08.495530 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:17:08.497267 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:17:08.500217 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:17:08.500249 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:17:08.501446 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:17:08.516312 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:17:08.518947 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:17:08.523801 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:17:08.525968 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:17:08.527858 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:17:08.530917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:17:08.532825 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:17:08.536829 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:17:08.538981 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:17:08.540385 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:17:08.543256 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:17:08.543282 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:17:08.545111 systemd[1]: Starting chronyd.service - NTP client/server... 
Nov 24 00:17:08.549358 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:17:08.554908 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:17:08.560329 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:17:08.565005 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:17:08.571304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:17:08.575620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:17:08.577019 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:17:08.582306 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:17:08.584050 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 24 00:17:08.585631 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 24 00:17:08.587550 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 24 00:17:08.591692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:08.599201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:17:08.604916 jq[1666]: false Nov 24 00:17:08.605335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:17:08.607793 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:17:08.615337 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:17:08.622212 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Refreshing passwd entry cache Nov 24 00:17:08.621346 oslogin_cache_refresh[1668]: Refreshing passwd entry cache Nov 24 00:17:08.622944 KVP[1672]: KVP starting; pid is:1672 Nov 24 00:17:08.625334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:17:08.632654 kernel: hv_utils: KVP IC version 4.0 Nov 24 00:17:08.632268 KVP[1672]: KVP LIC Version: 3.1 Nov 24 00:17:08.633024 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:17:08.636925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:17:08.642975 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:17:08.646562 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:17:08.648981 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Failure getting users, quitting Nov 24 00:17:08.648981 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:17:08.648981 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Refreshing group entry cache Nov 24 00:17:08.648599 oslogin_cache_refresh[1668]: Failure getting users, quitting Nov 24 00:17:08.648616 oslogin_cache_refresh[1668]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 24 00:17:08.648656 oslogin_cache_refresh[1668]: Refreshing group entry cache Nov 24 00:17:08.651090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:17:08.658379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:17:08.661577 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:17:08.666138 extend-filesystems[1667]: Found /dev/nvme0n1p6 Nov 24 00:17:08.665962 oslogin_cache_refresh[1668]: Failure getting groups, quitting Nov 24 00:17:08.675822 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Failure getting groups, quitting Nov 24 00:17:08.675822 google_oslogin_nss_cache[1668]: oslogin_cache_refresh[1668]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:17:08.668366 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:17:08.665972 oslogin_cache_refresh[1668]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:17:08.668663 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:17:08.668834 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:17:08.685441 extend-filesystems[1667]: Found /dev/nvme0n1p9 Nov 24 00:17:08.688730 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:17:08.689193 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 00:17:08.691939 extend-filesystems[1667]: Checking size of /dev/nvme0n1p9 Nov 24 00:17:08.697260 chronyd[1661]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 24 00:17:08.694055 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:17:08.698611 jq[1685]: true Nov 24 00:17:08.694249 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:17:08.717682 chronyd[1661]: Timezone right/UTC failed leap second check, ignoring Nov 24 00:17:08.718338 chronyd[1661]: Loaded seccomp filter (level 2) Nov 24 00:17:08.720227 systemd[1]: Started chronyd.service - NTP client/server. Nov 24 00:17:08.722332 (ntainerd)[1704]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:17:08.727839 jq[1703]: true Nov 24 00:17:08.734627 extend-filesystems[1667]: Old size kept for /dev/nvme0n1p9 Nov 24 00:17:08.744037 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:17:08.745352 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:17:08.749738 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:17:08.786183 update_engine[1684]: I20251124 00:17:08.785192 1684 main.cc:92] Flatcar Update Engine starting Nov 24 00:17:08.798120 tar[1696]: linux-amd64/LICENSE Nov 24 00:17:08.798332 tar[1696]: linux-amd64/helm Nov 24 00:17:08.840861 systemd-logind[1680]: New seat seat0. Nov 24 00:17:08.847557 systemd-logind[1680]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 24 00:17:08.847712 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:17:08.857427 dbus-daemon[1664]: [system] SELinux support is enabled Nov 24 00:17:08.857552 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 24 00:17:08.864000 bash[1741]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:17:08.863680 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:17:08.863703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:17:08.866577 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:17:08.866597 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:17:08.869830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:17:08.875469 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 24 00:17:08.879300 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:17:08.881263 update_engine[1684]: I20251124 00:17:08.880610 1684 update_check_scheduler.cc:74] Next update check in 4m40s Nov 24 00:17:08.886454 dbus-daemon[1664]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 00:17:08.899819 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:17:08.960523 coreos-metadata[1663]: Nov 24 00:17:08.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 24 00:17:08.965899 coreos-metadata[1663]: Nov 24 00:17:08.965 INFO Fetch successful Nov 24 00:17:08.965899 coreos-metadata[1663]: Nov 24 00:17:08.965 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 24 00:17:08.971596 coreos-metadata[1663]: Nov 24 00:17:08.971 INFO Fetch successful Nov 24 00:17:08.971704 coreos-metadata[1663]: Nov 24 00:17:08.971 INFO Fetching http://168.63.129.16/machine/d37ef4d8-7ae0-4d8e-bb89-32a257718c5e/53be6753%2D97c9%2D478c%2D84d8%2D1ae35fc2e4e1.%5Fci%2D4459.2.1%2Da%2D8bf8e53aa8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 24 00:17:08.975367 coreos-metadata[1663]: Nov 24 00:17:08.974 INFO Fetch successful Nov 24 00:17:08.975485 coreos-metadata[1663]: Nov 24 00:17:08.975 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 24 00:17:08.988496 coreos-metadata[1663]: Nov 24 00:17:08.987 INFO Fetch successful Nov 24 00:17:09.034683 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:17:09.039609 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:17:09.199125 sshd_keygen[1702]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:17:09.225102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:17:09.228135 locksmithd[1757]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:17:09.232751 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:17:09.242388 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 24 00:17:09.270258 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:17:09.272903 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:17:09.277410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
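[Editor's note] The coreos-metadata fetches above hit both the Azure WireServer (168.63.129.16) and the instance metadata service; the vmSize URL below is copied verbatim from the log. A hedged sketch of the IMDS side only (Azure IMDS requires a `Metadata: true` request header; the WireServer goal-state calls are omitted), with a made-up function name:

```python
# Minimal sketch of the IMDS query coreos-metadata performs above.
# URL taken from the log; the Metadata header is required by Azure IMDS.
import urllib.request

IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
               "?api-version=2017-08-01&format=text")

def fetch_vmsize(timeout: float = 2.0) -> str:
    req = urllib.request.Request(IMDS_VMSIZE, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Only answers from inside an Azure VM; elsewhere the request times out.
    print(fetch_vmsize())
```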
Nov 24 00:17:09.284832 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 24 00:17:09.306819 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:17:09.312341 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:17:09.316832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:17:09.320591 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:17:09.433091 tar[1696]: linux-amd64/README.md Nov 24 00:17:09.449570 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:17:09.795636 containerd[1704]: time="2025-11-24T00:17:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:17:09.797183 containerd[1704]: time="2025-11-24T00:17:09.796649057Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:17:09.807215 containerd[1704]: time="2025-11-24T00:17:09.807183280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.96µs" Nov 24 00:17:09.807310 containerd[1704]: time="2025-11-24T00:17:09.807297392Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:17:09.807369 containerd[1704]: time="2025-11-24T00:17:09.807359944Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:17:09.807516 containerd[1704]: time="2025-11-24T00:17:09.807507531Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:17:09.807556 containerd[1704]: time="2025-11-24T00:17:09.807548848Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:17:09.807615 containerd[1704]: time="2025-11-24T00:17:09.807607780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:17:09.807692 containerd[1704]: time="2025-11-24T00:17:09.807682182Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:17:09.807726 containerd[1704]: time="2025-11-24T00:17:09.807719112Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:17:09.807983 containerd[1704]: time="2025-11-24T00:17:09.807969232Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808021 containerd[1704]: time="2025-11-24T00:17:09.808012871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808057 containerd[1704]: time="2025-11-24T00:17:09.808048490Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808100 containerd[1704]: time="2025-11-24T00:17:09.808091654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808214 
containerd[1704]: time="2025-11-24T00:17:09.808203673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808435 containerd[1704]: time="2025-11-24T00:17:09.808407344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808480 containerd[1704]: time="2025-11-24T00:17:09.808437503Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:17:09.808480 containerd[1704]: time="2025-11-24T00:17:09.808448825Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:17:09.808547 containerd[1704]: time="2025-11-24T00:17:09.808485601Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:17:09.809523 containerd[1704]: time="2025-11-24T00:17:09.808764071Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:17:09.809523 containerd[1704]: time="2025-11-24T00:17:09.808827011Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823762481Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823827417Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823846489Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823859785Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823877735Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823890102Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823904897Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823917445Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823928288Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823946551Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823957171Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.823970225Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:17:09.824153 containerd[1704]: 
time="2025-11-24T00:17:09.824077471Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:17:09.824153 containerd[1704]: time="2025-11-24T00:17:09.824098689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824112905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824125188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824136453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824148189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824158770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824266865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824281144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824292638Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824303271Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824358838Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824377708Z" level=info msg="Start snapshots syncer" Nov 24 00:17:09.824505 containerd[1704]: time="2025-11-24T00:17:09.824458821Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:17:09.824835 containerd[1704]: time="2025-11-24T00:17:09.824798101Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:17:09.824949 containerd[1704]: time="2025-11-24T00:17:09.824863879Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:17:09.824949 containerd[1704]: time="2025-11-24T00:17:09.824916586Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:17:09.825069 containerd[1704]: time="2025-11-24T00:17:09.825048074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:17:09.825095 containerd[1704]: time="2025-11-24T00:17:09.825069831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:17:09.825095 containerd[1704]: time="2025-11-24T00:17:09.825081364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:17:09.825095 containerd[1704]: time="2025-11-24T00:17:09.825092215Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825116054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825140021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825152957Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825191952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:17:09.826656 containerd[1704]: 
time="2025-11-24T00:17:09.825202743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825213504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825258842Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825273980Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825283100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825293197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825301077Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825357109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825373423Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:17:09.826656 containerd[1704]: time="2025-11-24T00:17:09.825397509Z" level=info msg="runtime interface created" Nov 24 00:17:09.826925 containerd[1704]: time="2025-11-24T00:17:09.825403122Z" level=info msg="created NRI interface" Nov 24 00:17:09.826925 containerd[1704]: time="2025-11-24T00:17:09.825412407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:17:09.826925 containerd[1704]: time="2025-11-24T00:17:09.825423193Z" level=info msg="Connect containerd service" Nov 24 00:17:09.826925 containerd[1704]: time="2025-11-24T00:17:09.825441973Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:17:09.826925 containerd[1704]: time="2025-11-24T00:17:09.826281959Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:17:09.999187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:17:10.013481 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:17:10.288004 containerd[1704]: time="2025-11-24T00:17:10.286732894Z" level=info msg="Start subscribing containerd event" Nov 24 00:17:10.288276 containerd[1704]: time="2025-11-24T00:17:10.287986104Z" level=info msg="Start recovering state" Nov 24 00:17:10.288493 containerd[1704]: time="2025-11-24T00:17:10.288455649Z" level=info msg="Start event monitor" Nov 24 00:17:10.288493 containerd[1704]: time="2025-11-24T00:17:10.288476103Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:17:10.288493 containerd[1704]: time="2025-11-24T00:17:10.288485397Z" level=info msg="Start streaming server" Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288499984Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288507280Z" level=info msg="runtime interface starting up..." Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288513657Z" level=info msg="starting plugins..." Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288526286Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288705828Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:17:10.289654 containerd[1704]: time="2025-11-24T00:17:10.288743896Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:17:10.289481 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:17:10.291943 containerd[1704]: time="2025-11-24T00:17:10.290269259Z" level=info msg="containerd successfully booted in 0.495322s" Nov 24 00:17:10.291968 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:17:10.295637 systemd[1]: Startup finished in 3.194s (kernel) + 9.772s (initrd) + 9.733s (userspace) = 22.700s. Nov 24 00:17:10.501864 kubelet[1824]: E1124 00:17:10.501818 1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:17:10.503794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:17:10.503932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:17:10.504410 systemd[1]: kubelet.service: Consumed 940ms CPU time, 262M memory peak. Nov 24 00:17:10.617080 login[1801]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 24 00:17:10.618579 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:17:10.627407 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:17:10.628686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:17:10.639333 systemd-logind[1680]: New session 1 of user core. Nov 24 00:17:10.647440 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:17:10.649604 systemd[1]: Starting user@500.service - User Manager for UID 500... 
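[Editor's note] The kubelet failure above is only a missing file: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps scheduling restarts (the same error recurs below at restart counters 1 through 3). On a kubeadm-managed node this file is generated when the node is initialized or joined; as an illustration of its shape only, a sketch that writes a minimal KubeletConfiguration with placeholder values:

```python
# Illustrative only: the kind of file the kubelet is looking for above.
# On real nodes kubeadm writes /var/lib/kubelet/config.yaml during init/join;
# the values below are minimal placeholders, not taken from this system.
import os

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # consistent with SystemdCgroup=true in the CRI config above
failSwapOn: false
"""

def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(KUBELET_CONFIG)

if __name__ == "__main__":
    write_config()
```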
Nov 24 00:17:10.657907 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:17:10.660285 systemd-logind[1680]: New session c1 of user core. Nov 24 00:17:10.813971 systemd[1841]: Queued start job for default target default.target. Nov 24 00:17:10.820947 systemd[1841]: Created slice app.slice - User Application Slice. Nov 24 00:17:10.820973 systemd[1841]: Reached target paths.target - Paths. Nov 24 00:17:10.821082 systemd[1841]: Reached target timers.target - Timers. Nov 24 00:17:10.822010 systemd[1841]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:17:10.830912 systemd[1841]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:17:10.830969 systemd[1841]: Reached target sockets.target - Sockets. Nov 24 00:17:10.831004 systemd[1841]: Reached target basic.target - Basic System. Nov 24 00:17:10.831075 systemd[1841]: Reached target default.target - Main User Target. Nov 24 00:17:10.831104 systemd[1841]: Startup finished in 166ms. Nov 24 00:17:10.831133 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:17:10.833197 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:17:11.103999 waagent[1798]: 2025-11-24T00:17:11.103862Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 24 00:17:11.105633 waagent[1798]: 2025-11-24T00:17:11.105551Z INFO Daemon Daemon OS: flatcar 4459.2.1 Nov 24 00:17:11.106857 waagent[1798]: 2025-11-24T00:17:11.106778Z INFO Daemon Daemon Python: 3.11.13 Nov 24 00:17:11.108268 waagent[1798]: 2025-11-24T00:17:11.108206Z INFO Daemon Daemon Run daemon Nov 24 00:17:11.109507 waagent[1798]: 2025-11-24T00:17:11.109478Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.1' Nov 24 00:17:11.113180 waagent[1798]: 2025-11-24T00:17:11.109957Z INFO Daemon Daemon Using waagent for provisioning Nov 24 00:17:11.113180 waagent[1798]: 2025-11-24T00:17:11.111420Z INFO Daemon Daemon Activate resource disk Nov 24 00:17:11.113180 waagent[1798]: 2025-11-24T00:17:11.111592Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 24 00:17:11.120978 waagent[1798]: 2025-11-24T00:17:11.113364Z INFO Daemon Daemon Found device: None Nov 24 00:17:11.120978 waagent[1798]: 2025-11-24T00:17:11.113520Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 24 00:17:11.120978 waagent[1798]: 2025-11-24T00:17:11.113822Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 24 00:17:11.120978 waagent[1798]: 2025-11-24T00:17:11.114653Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:17:11.120978 waagent[1798]: 2025-11-24T00:17:11.114829Z INFO Daemon Daemon Running default provisioning handler Nov 24 00:17:11.122595 waagent[1798]: 2025-11-24T00:17:11.122391Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Nov 24 00:17:11.122993 waagent[1798]: 2025-11-24T00:17:11.122959Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 24 00:17:11.123088 waagent[1798]: 2025-11-24T00:17:11.123066Z INFO Daemon Daemon cloud-init is enabled: False Nov 24 00:17:11.123146 waagent[1798]: 2025-11-24T00:17:11.123127Z INFO Daemon Daemon Copying ovf-env.xml Nov 24 00:17:11.173947 waagent[1798]: 2025-11-24T00:17:11.173305Z INFO Daemon Daemon Successfully mounted dvd Nov 24 00:17:11.198763 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 24 00:17:11.200761 waagent[1798]: 2025-11-24T00:17:11.200714Z INFO Daemon Daemon Detect protocol endpoint Nov 24 00:17:11.204571 waagent[1798]: 2025-11-24T00:17:11.201372Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:17:11.204571 waagent[1798]: 2025-11-24T00:17:11.201722Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 24 00:17:11.204571 waagent[1798]: 2025-11-24T00:17:11.202048Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 24 00:17:11.204571 waagent[1798]: 2025-11-24T00:17:11.202220Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 24 00:17:11.204571 waagent[1798]: 2025-11-24T00:17:11.202762Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 24 00:17:11.218442 waagent[1798]: 2025-11-24T00:17:11.218406Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 24 00:17:11.220485 waagent[1798]: 2025-11-24T00:17:11.219125Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 24 00:17:11.220485 waagent[1798]: 2025-11-24T00:17:11.219419Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 24 00:17:11.298002 waagent[1798]: 2025-11-24T00:17:11.297923Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 24 00:17:11.299723 waagent[1798]: 2025-11-24T00:17:11.299642Z INFO Daemon Daemon Forcing an update of the goal state. Nov 24 00:17:11.304602 waagent[1798]: 2025-11-24T00:17:11.304560Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:17:11.319528 waagent[1798]: 2025-11-24T00:17:11.319497Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Nov 24 00:17:11.322184 waagent[1798]: 2025-11-24T00:17:11.320509Z INFO Daemon Nov 24 00:17:11.322184 waagent[1798]: 2025-11-24T00:17:11.320830Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3a096660-3362-457d-8d5f-58f5ce6cacb0 eTag: 14096432899187256781 source: Fabric] Nov 24 00:17:11.322184 waagent[1798]: 2025-11-24T00:17:11.321111Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 24 00:17:11.322184 waagent[1798]: 2025-11-24T00:17:11.321441Z INFO Daemon Nov 24 00:17:11.322184 waagent[1798]: 2025-11-24T00:17:11.321623Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:17:11.331011 waagent[1798]: 2025-11-24T00:17:11.330980Z INFO Daemon Daemon Downloading artifacts profile blob Nov 24 00:17:11.488726 waagent[1798]: 2025-11-24T00:17:11.488619Z INFO Daemon Downloaded certificate {'thumbprint': 'D5A93B321F9E32BD82F49EF09394E098D0E7CDB4', 'hasPrivateKey': True} Nov 24 00:17:11.492193 waagent[1798]: 2025-11-24T00:17:11.489906Z INFO Daemon Fetch goal state completed Nov 24 00:17:11.539415 waagent[1798]: 2025-11-24T00:17:11.539338Z INFO Daemon Daemon Starting provisioning Nov 24 00:17:11.541035 waagent[1798]: 2025-11-24T00:17:11.539655Z INFO Daemon Daemon Handle ovf-env.xml. 
Nov 24 00:17:11.541035 waagent[1798]: 2025-11-24T00:17:11.539752Z INFO Daemon Daemon Set hostname [ci-4459.2.1-a-8bf8e53aa8] Nov 24 00:17:11.556150 waagent[1798]: 2025-11-24T00:17:11.556104Z INFO Daemon Daemon Publish hostname [ci-4459.2.1-a-8bf8e53aa8] Nov 24 00:17:11.557604 waagent[1798]: 2025-11-24T00:17:11.556812Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 24 00:17:11.557604 waagent[1798]: 2025-11-24T00:17:11.557385Z INFO Daemon Daemon Primary interface is [eth0] Nov 24 00:17:11.565260 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:17:11.565266 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:17:11.569620 waagent[1798]: 2025-11-24T00:17:11.565781Z INFO Daemon Daemon Create user account if not exists Nov 24 00:17:11.569620 waagent[1798]: 2025-11-24T00:17:11.566505Z INFO Daemon Daemon User core already exists, skip useradd Nov 24 00:17:11.569620 waagent[1798]: 2025-11-24T00:17:11.567039Z INFO Daemon Daemon Configure sudoer Nov 24 00:17:11.565292 systemd-networkd[1340]: eth0: DHCP lease lost Nov 24 00:17:11.571500 waagent[1798]: 2025-11-24T00:17:11.571339Z INFO Daemon Daemon Configure sshd Nov 24 00:17:11.575957 waagent[1798]: 2025-11-24T00:17:11.575915Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 24 00:17:11.581865 waagent[1798]: 2025-11-24T00:17:11.576575Z INFO Daemon Daemon Deploy ssh public key. Nov 24 00:17:11.587225 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:17:11.617423 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:17:11.621998 systemd-logind[1680]: New session 2 of user core. Nov 24 00:17:11.626319 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:17:12.661030 waagent[1798]: 2025-11-24T00:17:12.660975Z INFO Daemon Daemon Provisioning complete Nov 24 00:17:12.674820 waagent[1798]: 2025-11-24T00:17:12.674781Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 24 00:17:12.676478 waagent[1798]: 2025-11-24T00:17:12.676404Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 24 00:17:12.678642 waagent[1798]: 2025-11-24T00:17:12.678610Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 24 00:17:12.783684 waagent[1891]: 2025-11-24T00:17:12.783604Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 24 00:17:12.783987 waagent[1891]: 2025-11-24T00:17:12.783711Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.1 Nov 24 00:17:12.783987 waagent[1891]: 2025-11-24T00:17:12.783752Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 24 00:17:12.783987 waagent[1891]: 2025-11-24T00:17:12.783793Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 24 00:17:12.818277 waagent[1891]: 2025-11-24T00:17:12.818224Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 24 00:17:12.818402 waagent[1891]: 2025-11-24T00:17:12.818375Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:17:12.818452 waagent[1891]: 2025-11-24T00:17:12.818431Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:17:12.823921 waagent[1891]: 2025-11-24T00:17:12.823859Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:17:12.834968 waagent[1891]: 2025-11-24T00:17:12.834934Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Nov 24 00:17:12.835360 waagent[1891]: 2025-11-24T00:17:12.835327Z INFO ExtHandler Nov 24 00:17:12.835405 waagent[1891]: 2025-11-24T00:17:12.835384Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9e4250ca-c42f-4237-b17b-ca9bfceae994 eTag: 14096432899187256781 source: Fabric] Nov 24 00:17:12.835612 waagent[1891]: 2025-11-24T00:17:12.835587Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 24 00:17:12.835952 waagent[1891]: 2025-11-24T00:17:12.835921Z INFO ExtHandler Nov 24 00:17:12.836000 waagent[1891]: 2025-11-24T00:17:12.835966Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:17:12.844369 waagent[1891]: 2025-11-24T00:17:12.844338Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 24 00:17:12.912320 waagent[1891]: 2025-11-24T00:17:12.912238Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5A93B321F9E32BD82F49EF09394E098D0E7CDB4', 'hasPrivateKey': True} Nov 24 00:17:12.912648 waagent[1891]: 2025-11-24T00:17:12.912618Z INFO ExtHandler Fetch goal state completed Nov 24 00:17:12.926380 waagent[1891]: 2025-11-24T00:17:12.926334Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 24 00:17:12.930529 waagent[1891]: 2025-11-24T00:17:12.930481Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1891 Nov 24 00:17:12.930636 waagent[1891]: 2025-11-24T00:17:12.930609Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 24 00:17:12.930886 waagent[1891]: 2025-11-24T00:17:12.930863Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 24 00:17:12.931965 waagent[1891]: 2025-11-24T00:17:12.931931Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] Nov 24 00:17:12.932290 waagent[1891]: 2025-11-24T00:17:12.932259Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 24 00:17:12.932410 waagent[1891]: 2025-11-24T00:17:12.932384Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 24 00:17:12.932809 waagent[1891]: 2025-11-24T00:17:12.932780Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 24 00:17:12.979385 waagent[1891]: 2025-11-24T00:17:12.979357Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 24 00:17:12.979525 waagent[1891]: 2025-11-24T00:17:12.979504Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 24 00:17:12.985185 waagent[1891]: 2025-11-24T00:17:12.984838Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 24 00:17:12.989836 systemd[1]: Reload requested from client PID 1906 ('systemctl') (unit waagent.service)... Nov 24 00:17:12.989849 systemd[1]: Reloading... Nov 24 00:17:13.061190 zram_generator::config[1941]: No configuration found. Nov 24 00:17:13.245697 systemd[1]: Reloading finished in 255 ms. Nov 24 00:17:13.261205 waagent[1891]: 2025-11-24T00:17:13.258727Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 24 00:17:13.261205 waagent[1891]: 2025-11-24T00:17:13.258878Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 24 00:17:13.264181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:17:13.540123 waagent[1891]: 2025-11-24T00:17:13.540005Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 24 00:17:13.540393 waagent[1891]: 2025-11-24T00:17:13.540362Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 24 00:17:13.541121 waagent[1891]: 2025-11-24T00:17:13.541082Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 24 00:17:13.541337 waagent[1891]: 2025-11-24T00:17:13.541294Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:17:13.541408 waagent[1891]: 2025-11-24T00:17:13.541379Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:17:13.541612 waagent[1891]: 2025-11-24T00:17:13.541590Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 24 00:17:13.541816 waagent[1891]: 2025-11-24T00:17:13.541793Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 24 00:17:13.542058 waagent[1891]: 2025-11-24T00:17:13.542001Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 24 00:17:13.542090 waagent[1891]: 2025-11-24T00:17:13.542064Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 24 00:17:13.542339 waagent[1891]: 2025-11-24T00:17:13.542293Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 24 00:17:13.542436 waagent[1891]: 2025-11-24T00:17:13.542400Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:17:13.542502 waagent[1891]: 2025-11-24T00:17:13.542471Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:17:13.542614 waagent[1891]: 2025-11-24T00:17:13.542594Z INFO EnvHandler ExtHandler Configure routes Nov 24 00:17:13.542659 waagent[1891]: 2025-11-24T00:17:13.542640Z INFO EnvHandler ExtHandler Gateway:None Nov 24 00:17:13.542700 waagent[1891]: 2025-11-24T00:17:13.542681Z INFO EnvHandler ExtHandler Routes:None Nov 24 00:17:13.542734 waagent[1891]: 2025-11-24T00:17:13.542705Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
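[Editor's note] The "Checking if log collection is allowed" line above is a straight conjunction of the three bracketed values it prints. Restated as code, with the values copied from the log:

```python
# The decision printed above, restated with the bracketed values from the log.
configuration_enabled = True          # condition 1
cgroups_v1_enabled = False            # condition 2, first alternative
cgroups_v2_limiting_enabled = False   # condition 2, second alternative
python_supported = True               # condition 3

log_collection_allowed = (
    configuration_enabled
    and (cgroups_v1_enabled or cgroups_v2_limiting_enabled)
    and python_supported
)
print(log_collection_allowed)  # False, matching "[False]" in the log line
```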
Nov 24 00:17:13.542983 waagent[1891]: 2025-11-24T00:17:13.542964Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 24 00:17:13.543101 waagent[1891]: 2025-11-24T00:17:13.543084Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 24 00:17:13.543101 waagent[1891]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 24 00:17:13.543101 waagent[1891]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 24 00:17:13.543101 waagent[1891]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 24 00:17:13.543101 waagent[1891]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:17:13.543101 waagent[1891]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:17:13.543101 waagent[1891]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:17:13.553678 waagent[1891]: 2025-11-24T00:17:13.553628Z INFO ExtHandler ExtHandler Nov 24 00:17:13.553752 waagent[1891]: 2025-11-24T00:17:13.553694Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e966a655-b02c-4581-9bd5-7fb440935954 correlation 12745f56-9b23-4ad0-9e6c-9986d94288ab created: 2025-11-24T00:16:22.090935Z] Nov 24 00:17:13.553986 waagent[1891]: 2025-11-24T00:17:13.553961Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 24 00:17:13.554395 waagent[1891]: 2025-11-24T00:17:13.554370Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 24 00:17:13.578505 waagent[1891]: 2025-11-24T00:17:13.578457Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 24 00:17:13.578505 waagent[1891]: Try `iptables -h' or 'iptables --help' for more information.) 
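[Editor's note] The routing table the MonitorHandler prints above comes straight from /proc/net/route, where Destination, Gateway, and Mask are little-endian hex IPv4 values. A small sketch of the decoding (function names are mine), which maps the `0104C80A` gateway above back to the 10.200.4.1 DHCP gateway reported earlier:

```python
# Decode the hex IPv4 fields in /proc/net/route (little-endian, as printed
# by the MonitorHandler above). 0104C80A -> 10.200.4.1, the DHCP gateway.
import socket, struct

def hex_to_ip(hex_field: str) -> str:
    """Convert a /proc/net/route hex field to dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))

def default_gateways(path: str = "/proc/net/route"):
    with open(path) as f:
        next(f)  # skip the header line (Iface Destination Gateway ...)
        for line in f:
            fields = line.split()
            iface, dest, gw = fields[0], fields[1], fields[2]
            if dest == "00000000":      # destination 0.0.0.0/0 -> default route
                yield iface, hex_to_ip(gw)

if __name__ == "__main__":
    print(hex_to_ip("0104C80A"))        # 10.200.4.1
    for iface, gw in default_gateways():
        print(iface, "default via", gw)
```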
Nov 24 00:17:13.578847 waagent[1891]: 2025-11-24T00:17:13.578814Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 741DB09C-8ECF-442D-B3B3-94FE1F9A1ADF;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 24 00:17:13.603974 waagent[1891]: 2025-11-24T00:17:13.603925Z INFO MonitorHandler ExtHandler Network interfaces: Nov 24 00:17:13.603974 waagent[1891]: Executing ['ip', '-a', '-o', 'link']: Nov 24 00:17:13.603974 waagent[1891]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 24 00:17:13.603974 waagent[1891]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d5:c2:24 brd ff:ff:ff:ff:ff:ff\ alias Network Device Nov 24 00:17:13.603974 waagent[1891]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d5:c2:24 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 24 00:17:13.603974 waagent[1891]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 24 00:17:13.603974 waagent[1891]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 24 00:17:13.603974 waagent[1891]: 2: eth0 inet 10.200.4.12/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 24 00:17:13.603974 waagent[1891]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 24 00:17:13.603974 waagent[1891]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 24 00:17:13.603974 waagent[1891]: 2: eth0 inet6 fe80::20d:3aff:fed5:c224/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 24 00:17:13.668352 waagent[1891]: 2025-11-24T00:17:13.668300Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 24 00:17:13.668352 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.668352 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.668352 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.668352 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.668352 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.668352 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.668352 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 24 00:17:13.668352 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 24 00:17:13.668352 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 24 00:17:13.670974 waagent[1891]: 2025-11-24T00:17:13.670927Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 24 00:17:13.670974 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.670974 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.670974 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.670974 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.670974 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:17:13.670974 waagent[1891]: pkts bytes target prot opt in out source destination Nov 24 00:17:13.670974 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 24 00:17:13.670974 waagent[1891]: 0 0 ACCEPT tcp 
-- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 24 00:17:13.670974 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 24 00:17:20.754770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:17:20.756183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:21.279204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:17:21.285368 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:17:21.321345 kubelet[2043]: E1124 00:17:21.321298 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:17:21.324238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:17:21.324380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:17:21.324708 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.8M memory peak. Nov 24 00:17:31.352079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:17:31.353527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:31.864153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:17:31.872370 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:17:31.909624 kubelet[2057]: E1124 00:17:31.909588 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:17:31.911347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:17:31.911495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:17:31.911851 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108M memory peak. Nov 24 00:17:32.503486 chronyd[1661]: Selected source PHC0 Nov 24 00:17:38.501926 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:17:38.503072 systemd[1]: Started sshd@0-10.200.4.12:22-10.200.16.10:54754.service - OpenSSH per-connection server daemon (10.200.16.10:54754). Nov 24 00:17:39.174922 sshd[2065]: Accepted publickey for core from 10.200.16.10 port 54754 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:39.175985 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:39.180319 systemd-logind[1680]: New session 3 of user core. Nov 24 00:17:39.187304 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:17:39.714897 systemd[1]: Started sshd@1-10.200.4.12:22-10.200.16.10:54770.service - OpenSSH per-connection server daemon (10.200.16.10:54770). 
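[Editor's note] The firewall rules shown above (ACCEPT DNS and agent-owned traffic to 168.63.129.16, DROP other new connections to it) live in the `security` table's OUTPUT chain. A hedged sketch of inspecting them with standard iptables flags; the exact flag combination waagent itself used is the one that produced the iptables error earlier in the log, so this uses only the ordinary -w/-t/-L/-n/-v/-x options:

```python
# Inspect the security-table OUTPUT chain that waagent populated above.
# Uses standard iptables flags (-w wait, -t table, -L list, -n numeric,
# -v verbose, -x exact counters); must run as root on the node.
import subprocess

def security_output_rules() -> str:
    result = subprocess.run(
        ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-n", "-v", "-x"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    rules = security_output_rules()
    print(rules)
    # The WireServer DROP rule created above should mention 168.63.129.16.
    print("wireserver drop present:", "168.63.129.16" in rules and "DROP" in rules)
```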
Nov 24 00:17:40.311216 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 54770 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:40.312329 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:40.316642 systemd-logind[1680]: New session 4 of user core. Nov 24 00:17:40.326308 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:17:40.731509 sshd[2074]: Connection closed by 10.200.16.10 port 54770 Nov 24 00:17:40.732044 sshd-session[2071]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:40.735255 systemd[1]: sshd@1-10.200.4.12:22-10.200.16.10:54770.service: Deactivated successfully. Nov 24 00:17:40.736718 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:17:40.737469 systemd-logind[1680]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:17:40.738499 systemd-logind[1680]: Removed session 4. Nov 24 00:17:40.849726 systemd[1]: Started sshd@2-10.200.4.12:22-10.200.16.10:54694.service - OpenSSH per-connection server daemon (10.200.16.10:54694). Nov 24 00:17:41.449180 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 54694 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:41.450301 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:41.454757 systemd-logind[1680]: New session 5 of user core. Nov 24 00:17:41.460330 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:17:41.873591 sshd[2083]: Connection closed by 10.200.16.10 port 54694 Nov 24 00:17:41.874112 sshd-session[2080]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:41.877363 systemd[1]: sshd@2-10.200.4.12:22-10.200.16.10:54694.service: Deactivated successfully. Nov 24 00:17:41.878874 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:17:41.879653 systemd-logind[1680]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:17:41.880722 systemd-logind[1680]: Removed session 5. Nov 24 00:17:41.991271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 24 00:17:41.992557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:41.995395 systemd[1]: Started sshd@3-10.200.4.12:22-10.200.16.10:54700.service - OpenSSH per-connection server daemon (10.200.16.10:54700). Nov 24 00:17:42.510259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:17:42.516377 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:17:42.553438 kubelet[2100]: E1124 00:17:42.553405 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:17:42.555113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:17:42.555261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:17:42.555572 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.8M memory peak. 
Nov 24 00:17:42.592520 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 54700 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:42.593696 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:42.598316 systemd-logind[1680]: New session 6 of user core. Nov 24 00:17:42.604347 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:17:43.012893 sshd[2107]: Connection closed by 10.200.16.10 port 54700 Nov 24 00:17:43.013485 sshd-session[2090]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:43.016696 systemd[1]: sshd@3-10.200.4.12:22-10.200.16.10:54700.service: Deactivated successfully. Nov 24 00:17:43.018313 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:17:43.018930 systemd-logind[1680]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:17:43.020111 systemd-logind[1680]: Removed session 6. Nov 24 00:17:43.117688 systemd[1]: Started sshd@4-10.200.4.12:22-10.200.16.10:54716.service - OpenSSH per-connection server daemon (10.200.16.10:54716). Nov 24 00:17:43.712411 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 54716 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:43.713448 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:43.717567 systemd-logind[1680]: New session 7 of user core. Nov 24 00:17:43.724335 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:17:44.136459 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:17:44.136684 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:17:44.160979 sudo[2117]: pam_unix(sudo:session): session closed for user root Nov 24 00:17:44.266387 sshd[2116]: Connection closed by 10.200.16.10 port 54716 Nov 24 00:17:44.267100 sshd-session[2113]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:44.270260 systemd[1]: sshd@4-10.200.4.12:22-10.200.16.10:54716.service: Deactivated successfully. Nov 24 00:17:44.271770 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:17:44.273390 systemd-logind[1680]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:17:44.274363 systemd-logind[1680]: Removed session 7. Nov 24 00:17:44.375836 systemd[1]: Started sshd@5-10.200.4.12:22-10.200.16.10:54732.service - OpenSSH per-connection server daemon (10.200.16.10:54732). Nov 24 00:17:44.969872 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 54732 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:44.970984 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:44.975538 systemd-logind[1680]: New session 8 of user core. Nov 24 00:17:44.981305 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 24 00:17:45.296112 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:17:45.296373 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:17:45.303712 sudo[2128]: pam_unix(sudo:session): session closed for user root Nov 24 00:17:45.307638 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:17:45.307857 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:17:45.315751 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:17:45.345920 augenrules[2150]: No rules Nov 24 00:17:45.346841 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:17:45.347004 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:17:45.347825 sudo[2127]: pam_unix(sudo:session): session closed for user root Nov 24 00:17:45.450599 sshd[2126]: Connection closed by 10.200.16.10 port 54732 Nov 24 00:17:45.451029 sshd-session[2123]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:45.453599 systemd[1]: sshd@5-10.200.4.12:22-10.200.16.10:54732.service: Deactivated successfully. Nov 24 00:17:45.455030 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:17:45.456357 systemd-logind[1680]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:17:45.457521 systemd-logind[1680]: Removed session 8. Nov 24 00:17:45.571709 systemd[1]: Started sshd@6-10.200.4.12:22-10.200.16.10:54746.service - OpenSSH per-connection server daemon (10.200.16.10:54746). Nov 24 00:17:46.162098 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 54746 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:17:46.163191 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:17:46.167694 systemd-logind[1680]: New session 9 of user core. Nov 24 00:17:46.173326 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:17:46.487682 sudo[2163]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:17:46.487902 sudo[2163]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:17:47.763103 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:17:47.769511 (dockerd)[2181]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:17:48.829128 dockerd[2181]: time="2025-11-24T00:17:48.829073422Z" level=info msg="Starting up" Nov 24 00:17:48.830058 dockerd[2181]: time="2025-11-24T00:17:48.830028009Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:17:48.841672 dockerd[2181]: time="2025-11-24T00:17:48.841631119Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:17:48.932036 dockerd[2181]: time="2025-11-24T00:17:48.931997614Z" level=info msg="Loading containers: start." Nov 24 00:17:48.970186 kernel: Initializing XFRM netlink socket Nov 24 00:17:49.247786 systemd-networkd[1340]: docker0: Link UP Nov 24 00:17:49.266782 dockerd[2181]: time="2025-11-24T00:17:49.266746488Z" level=info msg="Loading containers: done." 
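After the two audit rule files are removed via sudo, restarting audit-rules makes augenrules report "No rules". A small sketch that reproduces that check by counting active (non-comment) lines under /etc/audit/rules.d, the directory named in the commands above; the function and its default argument are illustrative:

# Minimal sketch: report why augenrules might print "No rules" -- count the
# *.rules files under /etc/audit/rules.d and the non-comment lines they hold.
from pathlib import Path

def audit_rule_summary(rules_dir="/etc/audit/rules.d"):
    directory = Path(rules_dir)
    if not directory.is_dir():
        return f"{rules_dir} does not exist"
    total = 0
    for rules_file in sorted(directory.glob("*.rules")):
        lines = [
            ln for ln in rules_file.read_text(errors="replace").splitlines()
            if ln.strip() and not ln.strip().startswith("#")
        ]
        total += len(lines)
        print(f"{rules_file}: {len(lines)} rule line(s)")
    return f"total active rule lines: {total}"

if __name__ == "__main__":
    print(audit_rule_summary())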
Nov 24 00:17:49.297418 dockerd[2181]: time="2025-11-24T00:17:49.297356972Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:17:49.297539 dockerd[2181]: time="2025-11-24T00:17:49.297460124Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:17:49.297539 dockerd[2181]: time="2025-11-24T00:17:49.297530663Z" level=info msg="Initializing buildkit" Nov 24 00:17:49.342331 dockerd[2181]: time="2025-11-24T00:17:49.342301933Z" level=info msg="Completed buildkit initialization" Nov 24 00:17:49.349314 dockerd[2181]: time="2025-11-24T00:17:49.349275927Z" level=info msg="Daemon has completed initialization" Nov 24 00:17:49.349873 dockerd[2181]: time="2025-11-24T00:17:49.349788351Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:17:49.349477 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:17:50.354017 containerd[1704]: time="2025-11-24T00:17:50.353982965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 24 00:17:51.166158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95405897.mount: Deactivated successfully. Nov 24 00:17:52.329456 containerd[1704]: time="2025-11-24T00:17:52.329403186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:52.332753 containerd[1704]: time="2025-11-24T00:17:52.332713387Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072191" Nov 24 00:17:52.336067 containerd[1704]: time="2025-11-24T00:17:52.336026068Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:52.341132 containerd[1704]: time="2025-11-24T00:17:52.340958451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:52.341615 containerd[1704]: time="2025-11-24T00:17:52.341591402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.987570217s" Nov 24 00:17:52.341675 containerd[1704]: time="2025-11-24T00:17:52.341661403Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Nov 24 00:17:52.342326 containerd[1704]: time="2025-11-24T00:17:52.342300545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 24 00:17:52.536115 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 24 00:17:52.601893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 24 00:17:52.604390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:53.018362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
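dockerd logs its own RFC 3339 timestamps, so its startup time can be read straight from the entries above (from "Starting up" to "API listen on /run/docker.sock", roughly half a second here). A minimal sketch of extracting that, assuming a one-entry-per-line journal export:

# Minimal sketch: measure how long dockerd took from "Starting up" to
# "API listen on /run/docker.sock", using the time="..." fields it logs.
import re
from datetime import datetime, timezone

TIME_FIELD = re.compile(r'time="(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.(\d+)Z"')

def parse_dockerd_time(line):
    m = TIME_FIELD.search(line)
    if not m:
        return None
    base, frac = m.groups()
    micros = int(frac[:6].ljust(6, "0"))  # dockerd logs nanoseconds; keep microseconds
    return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=micros, tzinfo=timezone.utc
    )

def startup_duration(lines):
    started = listening = None
    for line in lines:
        if 'msg="Starting up"' in line:
            started = parse_dockerd_time(line)
        elif 'msg="API listen on /run/docker.sock"' in line:
            listening = parse_dockerd_time(line)
    if started and listening:
        return (listening - started).total_seconds()
    return None

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        print("dockerd startup:", startup_duration(fh), "seconds")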
Nov 24 00:17:53.023458 (kubelet)[2454]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:17:53.057691 kubelet[2454]: E1124 00:17:53.057634 2454 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:17:53.059294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:17:53.059424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:17:53.059772 systemd[1]: kubelet.service: Consumed 128ms CPU time, 107.8M memory peak. Nov 24 00:17:54.028594 containerd[1704]: time="2025-11-24T00:17:54.028537454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:54.031407 containerd[1704]: time="2025-11-24T00:17:54.031321907Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992018" Nov 24 00:17:54.034265 containerd[1704]: time="2025-11-24T00:17:54.034238008Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:54.038740 containerd[1704]: time="2025-11-24T00:17:54.038695444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:54.039553 containerd[1704]: time="2025-11-24T00:17:54.039323180Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.696995418s" Nov 24 00:17:54.039553 containerd[1704]: time="2025-11-24T00:17:54.039352312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Nov 24 00:17:54.040111 containerd[1704]: time="2025-11-24T00:17:54.040075203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 24 00:17:54.595743 update_engine[1684]: I20251124 00:17:54.595680 1684 update_attempter.cc:509] Updating boot flags... 
Nov 24 00:17:55.230488 containerd[1704]: time="2025-11-24T00:17:55.230442237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:55.233040 containerd[1704]: time="2025-11-24T00:17:55.233007587Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404256" Nov 24 00:17:55.236121 containerd[1704]: time="2025-11-24T00:17:55.236067261Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:55.240069 containerd[1704]: time="2025-11-24T00:17:55.240024697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:55.240948 containerd[1704]: time="2025-11-24T00:17:55.240924906Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.200827234s" Nov 24 00:17:55.241009 containerd[1704]: time="2025-11-24T00:17:55.240952003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Nov 24 00:17:55.241642 containerd[1704]: time="2025-11-24T00:17:55.241611180Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 24 00:18:01.518604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310000788.mount: Deactivated successfully. 
Nov 24 00:18:01.892955 containerd[1704]: time="2025-11-24T00:18:01.892911201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:01.895637 containerd[1704]: time="2025-11-24T00:18:01.895604063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161431" Nov 24 00:18:01.899051 containerd[1704]: time="2025-11-24T00:18:01.899000441Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:01.902616 containerd[1704]: time="2025-11-24T00:18:01.902574870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:01.903113 containerd[1704]: time="2025-11-24T00:18:01.902885288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 6.661247107s" Nov 24 00:18:01.903113 containerd[1704]: time="2025-11-24T00:18:01.902916395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Nov 24 00:18:01.903362 containerd[1704]: time="2025-11-24T00:18:01.903335980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 24 00:18:02.608573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109097378.mount: Deactivated successfully. Nov 24 00:18:03.101998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 24 00:18:03.103346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:18:03.627215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:03.630401 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:18:03.663576 kubelet[2564]: E1124 00:18:03.663539 2564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:18:03.665055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:18:03.665152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:18:03.665455 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.6M memory peak. 
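The containerd pull entries pair a compressed byte count ("bytes read=...") with a reported pull duration, so a rough per-image throughput can be derived; for kube-proxy above that is about 31 MB in 6.7 s. A sketch under the same one-entry-per-line assumption (the rate is of compressed data and only approximate):

# Minimal sketch: rough pull throughput per image, combining containerd's
# "bytes read=" progress entries with the "Pulled image ... in <duration>" entries.
import re

BYTES = re.compile(r'stop pulling image (\S+): active requests=0, bytes read=(\d+)')
DONE = re.compile(r'Pulled image \\?"(\S+?)\\?".* in ([\d.]+)(ms|s)')

def pull_throughput(lines):
    read = {}
    for line in lines:
        m = BYTES.search(line)
        if m:
            read[m.group(1)] = max(read.get(m.group(1), 0), int(m.group(2)))
            continue
        m = DONE.search(line)
        if m:
            image, value, unit = m.groups()
            seconds = float(value) / (1000.0 if unit == "ms" else 1.0)
            if image in read and seconds > 0:
                yield image, read[image] / seconds / 1e6  # MB/s of compressed data

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for image, rate in pull_throughput(fh):
            print(f"{image}: ~{rate:.1f} MB/s")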
Nov 24 00:18:04.033248 containerd[1704]: time="2025-11-24T00:18:04.033124018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:04.036016 containerd[1704]: time="2025-11-24T00:18:04.035973132Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Nov 24 00:18:04.043296 containerd[1704]: time="2025-11-24T00:18:04.043255189Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:04.047786 containerd[1704]: time="2025-11-24T00:18:04.047534369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:04.048265 containerd[1704]: time="2025-11-24T00:18:04.048240284Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.14487452s" Nov 24 00:18:04.048310 containerd[1704]: time="2025-11-24T00:18:04.048275027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 24 00:18:04.048979 containerd[1704]: time="2025-11-24T00:18:04.048954846Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:18:04.662024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390449631.mount: Deactivated successfully. 
Nov 24 00:18:04.682925 containerd[1704]: time="2025-11-24T00:18:04.682888491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:18:04.685698 containerd[1704]: time="2025-11-24T00:18:04.685664092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 24 00:18:04.688769 containerd[1704]: time="2025-11-24T00:18:04.688727973Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:18:04.693181 containerd[1704]: time="2025-11-24T00:18:04.692724560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:18:04.693508 containerd[1704]: time="2025-11-24T00:18:04.693482719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 644.500151ms" Nov 24 00:18:04.693551 containerd[1704]: time="2025-11-24T00:18:04.693518255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:18:04.694149 containerd[1704]: time="2025-11-24T00:18:04.694114251Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 24 00:18:05.366475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086433323.mount: Deactivated successfully. 
Nov 24 00:18:07.263818 containerd[1704]: time="2025-11-24T00:18:07.263777908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:07.266515 containerd[1704]: time="2025-11-24T00:18:07.266488242Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Nov 24 00:18:07.269880 containerd[1704]: time="2025-11-24T00:18:07.269814032Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:07.274738 containerd[1704]: time="2025-11-24T00:18:07.274690453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:07.275844 containerd[1704]: time="2025-11-24T00:18:07.275481848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.581328495s" Nov 24 00:18:07.275844 containerd[1704]: time="2025-11-24T00:18:07.275514969Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 24 00:18:09.030098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:09.030273 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.6M memory peak. Nov 24 00:18:09.032283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:18:09.054447 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit session-9.scope)... Nov 24 00:18:09.054459 systemd[1]: Reloading... Nov 24 00:18:09.148196 zram_generator::config[2708]: No configuration found. Nov 24 00:18:09.378296 systemd[1]: Reloading finished in 323 ms. Nov 24 00:18:09.430508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:18:09.430577 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:18:09.430790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:09.430831 systemd[1]: kubelet.service: Consumed 99ms CPU time, 98.3M memory peak. Nov 24 00:18:09.432456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:18:10.060382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:10.069421 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:18:10.104446 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:18:10.104661 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:18:10.104661 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:18:10.104953 kubelet[2781]: I1124 00:18:10.104925 2781 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:18:10.555495 kubelet[2781]: I1124 00:18:10.555459 2781 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:18:10.555495 kubelet[2781]: I1124 00:18:10.555486 2781 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:18:10.555755 kubelet[2781]: I1124 00:18:10.555741 2781 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:18:10.578972 kubelet[2781]: E1124 00:18:10.578872 2781 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:18:10.580293 kubelet[2781]: I1124 00:18:10.580266 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:18:10.588670 kubelet[2781]: I1124 00:18:10.588650 2781 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:18:10.591518 kubelet[2781]: I1124 00:18:10.591497 2781 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:18:10.592919 kubelet[2781]: I1124 00:18:10.592884 2781 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:18:10.593071 kubelet[2781]: I1124 00:18:10.592915 2781 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-8bf8e53aa8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:18:10.593192 kubelet[2781]: I1124 
00:18:10.593077 2781 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:18:10.593192 kubelet[2781]: I1124 00:18:10.593087 2781 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:18:10.593244 kubelet[2781]: I1124 00:18:10.593207 2781 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:18:10.595956 kubelet[2781]: I1124 00:18:10.595941 2781 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:18:10.596014 kubelet[2781]: I1124 00:18:10.595965 2781 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:18:10.596014 kubelet[2781]: I1124 00:18:10.595987 2781 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:18:10.596014 kubelet[2781]: I1124 00:18:10.595999 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:18:10.601938 kubelet[2781]: W1124 00:18:10.601895 2781 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Nov 24 00:18:10.602009 kubelet[2781]: E1124 00:18:10.601950 2781 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:18:10.602254 kubelet[2781]: W1124 00:18:10.602220 2781 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-8bf8e53aa8&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Nov 24 00:18:10.602294 kubelet[2781]: E1124 00:18:10.602262 2781 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-8bf8e53aa8&limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:18:10.602554 kubelet[2781]: I1124 00:18:10.602539 2781 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:18:10.602912 kubelet[2781]: I1124 00:18:10.602888 2781 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:18:10.602951 kubelet[2781]: W1124 00:18:10.602938 2781 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
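The nodeConfig dump above lists the kubelet's hard eviction thresholds (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, memory.available < 100Mi). A small sketch that turns them into absolute values; the filesystem and inode sizes in the example are assumptions, only the percentages and the 100Mi figure come from the log:

# Minimal sketch: convert the logged HardEvictionThresholds into absolute values
# for a hypothetical node. Sizes passed to absolute_thresholds() are assumptions.
THRESHOLDS = {
    "nodefs.available": 0.10,      # evict when free rootfs space drops below 10%
    "nodefs.inodesFree": 0.05,
    "imagefs.available": 0.15,
    "imagefs.inodesFree": 0.05,
}
MEMORY_AVAILABLE_MIN = 100 * 1024 * 1024  # "100Mi" from the log

def absolute_thresholds(nodefs_bytes, imagefs_bytes, nodefs_inodes, imagefs_inodes):
    sizes = {
        "nodefs.available": nodefs_bytes,
        "nodefs.inodesFree": nodefs_inodes,
        "imagefs.available": imagefs_bytes,
        "imagefs.inodesFree": imagefs_inodes,
    }
    out = {signal: pct * sizes[signal] for signal, pct in THRESHOLDS.items()}
    out["memory.available"] = MEMORY_AVAILABLE_MIN
    return out

if __name__ == "__main__":
    # Example node: 60 GiB rootfs shared with images, ~3.9M inodes (assumed).
    for signal, value in absolute_thresholds(
        nodefs_bytes=60 * 2**30, imagefs_bytes=60 * 2**30,
        nodefs_inodes=3_900_000, imagefs_inodes=3_900_000,
    ).items():
        print(f"{signal}: {value:,.0f}")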
Nov 24 00:18:10.605343 kubelet[2781]: I1124 00:18:10.605320 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:18:10.605927 kubelet[2781]: I1124 00:18:10.605913 2781 server.go:1287] "Started kubelet" Nov 24 00:18:10.606221 kubelet[2781]: I1124 00:18:10.606200 2781 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:18:10.606848 kubelet[2781]: I1124 00:18:10.606829 2781 server.go:479] "Adding debug handlers to kubelet server" Nov 24 00:18:10.608145 kubelet[2781]: I1124 00:18:10.608114 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:18:10.609133 kubelet[2781]: I1124 00:18:10.608649 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:18:10.609133 kubelet[2781]: I1124 00:18:10.608840 2781 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:18:10.613369 kubelet[2781]: I1124 00:18:10.613349 2781 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:18:10.613516 kubelet[2781]: E1124 00:18:10.613501 2781 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" Nov 24 00:18:10.613590 kubelet[2781]: I1124 00:18:10.613580 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:18:10.617562 kubelet[2781]: I1124 00:18:10.617543 2781 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:18:10.617629 kubelet[2781]: I1124 00:18:10.617596 2781 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:18:10.617712 kubelet[2781]: E1124 00:18:10.617691 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-8bf8e53aa8?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="200ms" Nov 24 00:18:10.618937 kubelet[2781]: E1124 00:18:10.617753 2781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-a-8bf8e53aa8.187ac942496d3650 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-a-8bf8e53aa8,UID:ci-4459.2.1-a-8bf8e53aa8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-a-8bf8e53aa8,},FirstTimestamp:2025-11-24 00:18:10.605889104 +0000 UTC m=+0.532763068,LastTimestamp:2025-11-24 00:18:10.605889104 +0000 UTC m=+0.532763068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-a-8bf8e53aa8,}" Nov 24 00:18:10.620995 kubelet[2781]: I1124 00:18:10.620976 2781 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:18:10.621194 kubelet[2781]: I1124 00:18:10.621156 2781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:18:10.623234 kubelet[2781]: W1124 00:18:10.623193 2781 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Nov 24 00:18:10.623311 kubelet[2781]: E1124 00:18:10.623238 2781 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:18:10.623379 kubelet[2781]: I1124 00:18:10.623363 2781 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:18:10.641023 kubelet[2781]: E1124 00:18:10.640985 2781 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:18:10.644840 kubelet[2781]: I1124 00:18:10.644821 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:18:10.644840 kubelet[2781]: I1124 00:18:10.644832 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:18:10.644932 kubelet[2781]: I1124 00:18:10.644906 2781 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:18:10.651455 kubelet[2781]: I1124 00:18:10.651441 2781 policy_none.go:49] "None policy: Start" Nov 24 00:18:10.651455 kubelet[2781]: I1124 00:18:10.651456 2781 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:18:10.651529 kubelet[2781]: I1124 00:18:10.651466 2781 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:18:10.660253 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:18:10.670667 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:18:10.675620 kubelet[2781]: I1124 00:18:10.675594 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:18:10.677668 kubelet[2781]: I1124 00:18:10.677609 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:18:10.677668 kubelet[2781]: I1124 00:18:10.677630 2781 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:18:10.677668 kubelet[2781]: I1124 00:18:10.677649 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:18:10.678274 kubelet[2781]: I1124 00:18:10.677657 2781 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:18:10.678274 kubelet[2781]: E1124 00:18:10.678246 2781 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:18:10.679801 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 24 00:18:10.681610 kubelet[2781]: W1124 00:18:10.681320 2781 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Nov 24 00:18:10.681610 kubelet[2781]: E1124 00:18:10.681350 2781 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.12:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:18:10.682436 kubelet[2781]: I1124 00:18:10.682383 2781 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:18:10.683405 kubelet[2781]: I1124 00:18:10.683385 2781 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:18:10.684114 kubelet[2781]: I1124 00:18:10.683661 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:18:10.684730 kubelet[2781]: I1124 00:18:10.684715 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:18:10.685494 kubelet[2781]: E1124 00:18:10.685432 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:18:10.685494 kubelet[2781]: E1124 00:18:10.685484 2781 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-a-8bf8e53aa8\" not found" Nov 24 00:18:10.785095 kubelet[2781]: I1124 00:18:10.785058 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.785551 kubelet[2781]: E1124 00:18:10.785515 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.786853 systemd[1]: Created slice kubepods-burstable-pode8bb24b16bc10d69ab7cf749c70e6e71.slice - libcontainer container kubepods-burstable-pode8bb24b16bc10d69ab7cf749c70e6e71.slice. Nov 24 00:18:10.803253 kubelet[2781]: E1124 00:18:10.803233 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.805895 systemd[1]: Created slice kubepods-burstable-pod9f90f401a71e613c250e40e225c74c5b.slice - libcontainer container kubepods-burstable-pod9f90f401a71e613c250e40e225c74c5b.slice. Nov 24 00:18:10.815095 kubelet[2781]: E1124 00:18:10.815073 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.817514 systemd[1]: Created slice kubepods-burstable-pod3475ac299afec2755f3686302a3fcd48.slice - libcontainer container kubepods-burstable-pod3475ac299afec2755f3686302a3fcd48.slice. 
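At this point the kubelet cannot reach the API server: every reflector and the lease controller fail with "dial tcp 10.200.4.12:6443: connect: connection refused", and the lease controller schedules retries with a growing interval (200ms here, doubling in later entries). A minimal sketch of probing that endpoint with a similar doubling backoff; the delay cap and attempt count are arbitrary choices for the example, not kubelet defaults:

# Minimal sketch: probe the API server endpoint the kubelet is dialling
# (10.200.4.12:6443 in the log) with a doubling retry interval.
import socket
import time

def wait_for_apiserver(host="10.200.4.12", port=6443,
                       initial_delay=0.2, max_delay=7.0, attempts=10):
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"attempt {attempt}: {host}:{port} is accepting connections")
                return True
        except OSError as err:
            print(f"attempt {attempt}: {err}; retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
    return False

if __name__ == "__main__":
    wait_for_apiserver()

The refusals are expected on a self-hosted control plane: the kube-apiserver the kubelet is dialling is one of the static pods it is about to start itself.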
Nov 24 00:18:10.818496 kubelet[2781]: E1124 00:18:10.818235 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-8bf8e53aa8?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="400ms" Nov 24 00:18:10.818697 kubelet[2781]: I1124 00:18:10.818553 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818697 kubelet[2781]: I1124 00:18:10.818580 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818697 kubelet[2781]: I1124 00:18:10.818601 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818697 kubelet[2781]: I1124 00:18:10.818619 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818697 kubelet[2781]: I1124 00:18:10.818638 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818839 kubelet[2781]: I1124 00:18:10.818655 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818839 kubelet[2781]: I1124 00:18:10.818672 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818839 kubelet[2781]: I1124 00:18:10.818690 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.818839 kubelet[2781]: I1124 00:18:10.818714 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8bb24b16bc10d69ab7cf749c70e6e71-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"e8bb24b16bc10d69ab7cf749c70e6e71\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.819551 kubelet[2781]: E1124 00:18:10.819533 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.987459 kubelet[2781]: I1124 00:18:10.987429 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:10.987765 kubelet[2781]: E1124 00:18:10.987742 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:11.105142 containerd[1704]: time="2025-11-24T00:18:11.105105426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-8bf8e53aa8,Uid:e8bb24b16bc10d69ab7cf749c70e6e71,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:11.116705 containerd[1704]: time="2025-11-24T00:18:11.116666647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-8bf8e53aa8,Uid:9f90f401a71e613c250e40e225c74c5b,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:11.120900 containerd[1704]: time="2025-11-24T00:18:11.120853672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8,Uid:3475ac299afec2755f3686302a3fcd48,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:11.173115 containerd[1704]: time="2025-11-24T00:18:11.173052707Z" level=info msg="connecting to shim 672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8" address="unix:///run/containerd/s/97748e562b2fdac561294030a23aa3d8980ed60c42ad0f8516a69fce9cbec9b1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:11.203380 containerd[1704]: time="2025-11-24T00:18:11.203351876Z" level=info msg="connecting to shim 75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c" address="unix:///run/containerd/s/4b07bbd00ae8e8c2a4b11e6824d68a8640baec17bdaa58bbd7b8bf066ccfe27a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:11.211078 containerd[1704]: time="2025-11-24T00:18:11.210987353Z" level=info msg="connecting to shim 912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f" address="unix:///run/containerd/s/1745b53f71094e516a9f7411ddbf838b7719f5c7440fe6899c8075a6d63e598e" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:11.214397 systemd[1]: Started cri-containerd-672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8.scope - libcontainer container 672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8. 
Nov 24 00:18:11.220642 kubelet[2781]: E1124 00:18:11.220598 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-8bf8e53aa8?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="800ms" Nov 24 00:18:11.242384 systemd[1]: Started cri-containerd-912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f.scope - libcontainer container 912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f. Nov 24 00:18:11.245766 systemd[1]: Started cri-containerd-75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c.scope - libcontainer container 75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c. Nov 24 00:18:11.292501 containerd[1704]: time="2025-11-24T00:18:11.292418788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-8bf8e53aa8,Uid:e8bb24b16bc10d69ab7cf749c70e6e71,Namespace:kube-system,Attempt:0,} returns sandbox id \"672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8\"" Nov 24 00:18:11.296432 containerd[1704]: time="2025-11-24T00:18:11.296130037Z" level=info msg="CreateContainer within sandbox \"672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:18:11.318350 containerd[1704]: time="2025-11-24T00:18:11.318324000Z" level=info msg="Container 5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:11.335369 containerd[1704]: time="2025-11-24T00:18:11.335341987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-8bf8e53aa8,Uid:9f90f401a71e613c250e40e225c74c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c\"" Nov 24 00:18:11.336917 containerd[1704]: time="2025-11-24T00:18:11.336891358Z" level=info msg="CreateContainer within sandbox \"75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:18:11.354676 containerd[1704]: time="2025-11-24T00:18:11.354648187Z" level=info msg="Container bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:11.357933 containerd[1704]: time="2025-11-24T00:18:11.357865047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8,Uid:3475ac299afec2755f3686302a3fcd48,Namespace:kube-system,Attempt:0,} returns sandbox id \"912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f\"" Nov 24 00:18:11.359666 containerd[1704]: time="2025-11-24T00:18:11.359642767Z" level=info msg="CreateContainer within sandbox \"912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:18:11.372795 containerd[1704]: time="2025-11-24T00:18:11.372769255Z" level=info msg="CreateContainer within sandbox \"672e1b53c0b6213c9012e963b646c83dc8eef3cc1295f04ec0ee8a5b4c6582a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040\"" Nov 24 00:18:11.373345 containerd[1704]: time="2025-11-24T00:18:11.373323736Z" level=info msg="StartContainer for \"5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040\"" Nov 
24 00:18:11.374113 containerd[1704]: time="2025-11-24T00:18:11.374091115Z" level=info msg="connecting to shim 5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040" address="unix:///run/containerd/s/97748e562b2fdac561294030a23aa3d8980ed60c42ad0f8516a69fce9cbec9b1" protocol=ttrpc version=3 Nov 24 00:18:11.382482 containerd[1704]: time="2025-11-24T00:18:11.382454706Z" level=info msg="CreateContainer within sandbox \"75719d1d2788de709fcdd8195bfe4f9d5fc7293ed4861e4528f6c59d69fdcc3c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888\"" Nov 24 00:18:11.388884 containerd[1704]: time="2025-11-24T00:18:11.388857689Z" level=info msg="StartContainer for \"bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888\"" Nov 24 00:18:11.389586 kubelet[2781]: I1124 00:18:11.389533 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:11.389891 kubelet[2781]: E1124 00:18:11.389872 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:11.391392 systemd[1]: Started cri-containerd-5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040.scope - libcontainer container 5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040. Nov 24 00:18:11.392351 containerd[1704]: time="2025-11-24T00:18:11.392311662Z" level=info msg="connecting to shim bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888" address="unix:///run/containerd/s/4b07bbd00ae8e8c2a4b11e6824d68a8640baec17bdaa58bbd7b8bf066ccfe27a" protocol=ttrpc version=3 Nov 24 00:18:11.406569 containerd[1704]: time="2025-11-24T00:18:11.406456529Z" level=info msg="Container 7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:11.415434 systemd[1]: Started cri-containerd-bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888.scope - libcontainer container bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888. Nov 24 00:18:11.427096 containerd[1704]: time="2025-11-24T00:18:11.427062254Z" level=info msg="CreateContainer within sandbox \"912ddb85b0b2913e94c14666d575d9fa4612d6c8a704ef9cff1f0850c9f33c2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf\"" Nov 24 00:18:11.428596 containerd[1704]: time="2025-11-24T00:18:11.428576815Z" level=info msg="StartContainer for \"7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf\"" Nov 24 00:18:11.431351 containerd[1704]: time="2025-11-24T00:18:11.431216875Z" level=info msg="connecting to shim 7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf" address="unix:///run/containerd/s/1745b53f71094e516a9f7411ddbf838b7719f5c7440fe6899c8075a6d63e598e" protocol=ttrpc version=3 Nov 24 00:18:11.461297 systemd[1]: Started cri-containerd-7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf.scope - libcontainer container 7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf. 
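containerd reports each sandbox id and, later, the container id created inside it, so the three static pods can be tied to their containers. A sketch that pairs the "returns sandbox id" and "returns container id" entries, with the usual one-entry-per-line journal export assumed:

# Minimal sketch: relate the pod sandboxes created above to the containers
# started inside them, using containerd's completion entries.
import re

SANDBOX = re.compile(r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
                     r'.*returns sandbox id \\?"([0-9a-f]+)\\?"')
CONTAINER = re.compile(r'CreateContainer within sandbox \\?"([0-9a-f]+)\\?" '
                       r'for &ContainerMetadata\{Name:([^,]+),'
                       r'.*returns container id \\?"([0-9a-f]+)\\?"')

def map_containers(lines):
    pod_by_sandbox = {}
    for line in lines:
        m = SANDBOX.search(line)
        if m:
            pod_by_sandbox[m.group(2)] = m.group(1)
            continue
        m = CONTAINER.search(line)
        if m and m.group(1) in pod_by_sandbox:
            yield pod_by_sandbox[m.group(1)], m.group(2), m.group(3)

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for pod, name, cid in map_containers(fh):
            print(f"{pod}: {name} -> {cid[:12]}")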
Nov 24 00:18:11.479367 containerd[1704]: time="2025-11-24T00:18:11.479334508Z" level=info msg="StartContainer for \"5348ed52af448f854fb57ef8cbcb109cf67eea3aca066e6571be0c56bff07040\" returns successfully" Nov 24 00:18:11.498912 containerd[1704]: time="2025-11-24T00:18:11.498883750Z" level=info msg="StartContainer for \"bbdaaf416cde8cd8ea7ee2082dd7101452df25c1bb8693be71a84c11372e4888\" returns successfully" Nov 24 00:18:11.544310 containerd[1704]: time="2025-11-24T00:18:11.544208003Z" level=info msg="StartContainer for \"7a68c70efa6e9015b8db845503bf0eb5e7da520779035bc723382ab7410dc8cf\" returns successfully" Nov 24 00:18:11.688663 kubelet[2781]: E1124 00:18:11.688477 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:11.693097 kubelet[2781]: E1124 00:18:11.693081 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:11.696388 kubelet[2781]: E1124 00:18:11.696356 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:12.192322 kubelet[2781]: I1124 00:18:12.192252 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:12.698338 kubelet[2781]: E1124 00:18:12.698307 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:12.699351 kubelet[2781]: E1124 00:18:12.699334 2781 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.399539 kubelet[2781]: E1124 00:18:13.399496 2781 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-a-8bf8e53aa8\" not found" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.555111 kubelet[2781]: I1124 00:18:13.555068 2781 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.600676 kubelet[2781]: I1124 00:18:13.600644 2781 apiserver.go:52] "Watching apiserver" Nov 24 00:18:13.614747 kubelet[2781]: I1124 00:18:13.614717 2781 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.618185 kubelet[2781]: I1124 00:18:13.617660 2781 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:18:13.619666 kubelet[2781]: E1124 00:18:13.619635 2781 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-8bf8e53aa8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.619666 kubelet[2781]: I1124 00:18:13.619658 2781 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.621372 kubelet[2781]: E1124 00:18:13.621328 2781 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.621372 kubelet[2781]: I1124 00:18:13.621372 2781 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.622912 kubelet[2781]: E1124 00:18:13.622885 2781 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.834329 kubelet[2781]: I1124 00:18:13.834230 2781 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:13.836039 kubelet[2781]: E1124 00:18:13.835979 2781 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:14.083461 kubelet[2781]: I1124 00:18:14.083419 2781 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:14.085282 kubelet[2781]: E1124 00:18:14.085198 2781 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:15.509559 systemd[1]: Reload requested from client PID 3051 ('systemctl') (unit session-9.scope)... Nov 24 00:18:15.509573 systemd[1]: Reloading... Nov 24 00:18:15.599202 zram_generator::config[3098]: No configuration found. Nov 24 00:18:15.789042 systemd[1]: Reloading finished in 279 ms. Nov 24 00:18:15.821942 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:18:15.832894 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:18:15.833108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:15.833158 systemd[1]: kubelet.service: Consumed 849ms CPU time, 131.5M memory peak. Nov 24 00:18:15.834566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:18:16.307455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:18:16.316518 (kubelet)[3165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:18:16.360010 kubelet[3165]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:18:16.360359 kubelet[3165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:18:16.360359 kubelet[3165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:18:16.360359 kubelet[3165]: I1124 00:18:16.360178 3165 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:18:16.367370 kubelet[3165]: I1124 00:18:16.367007 3165 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:18:16.367370 kubelet[3165]: I1124 00:18:16.367044 3165 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:18:16.367370 kubelet[3165]: I1124 00:18:16.367347 3165 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:18:16.369369 kubelet[3165]: I1124 00:18:16.368783 3165 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 00:18:16.371770 kubelet[3165]: I1124 00:18:16.371186 3165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:18:16.374494 kubelet[3165]: I1124 00:18:16.374478 3165 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:18:16.377792 kubelet[3165]: I1124 00:18:16.377061 3165 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:18:16.377792 kubelet[3165]: I1124 00:18:16.377622 3165 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:18:16.377906 kubelet[3165]: I1124 00:18:16.377652 3165 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-8bf8e53aa8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:18:16.378003 kubelet[3165]: I1124 00:18:16.377917 3165 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:18:16.378003 kubelet[3165]: I1124 00:18:16.377927 3165 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:18:16.378003 kubelet[3165]: I1124 00:18:16.377975 3165 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:18:16.379962 
kubelet[3165]: I1124 00:18:16.378102 3165 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:18:16.379962 kubelet[3165]: I1124 00:18:16.378121 3165 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:18:16.379962 kubelet[3165]: I1124 00:18:16.378142 3165 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:18:16.379962 kubelet[3165]: I1124 00:18:16.378152 3165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:18:16.379962 kubelet[3165]: I1124 00:18:16.379722 3165 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:18:16.380326 kubelet[3165]: I1124 00:18:16.380137 3165 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:18:16.381154 kubelet[3165]: I1124 00:18:16.381133 3165 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:18:16.381231 kubelet[3165]: I1124 00:18:16.381182 3165 server.go:1287] "Started kubelet" Nov 24 00:18:16.385550 kubelet[3165]: I1124 00:18:16.385526 3165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:18:16.391195 kubelet[3165]: I1124 00:18:16.390561 3165 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:18:16.391497 kubelet[3165]: I1124 00:18:16.391481 3165 server.go:479] "Adding debug handlers to kubelet server" Nov 24 00:18:16.395623 kubelet[3165]: I1124 00:18:16.394836 3165 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:18:16.395623 kubelet[3165]: I1124 00:18:16.395007 3165 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:18:16.395623 kubelet[3165]: I1124 00:18:16.395190 3165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:18:16.396851 kubelet[3165]: I1124 00:18:16.396835 3165 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:18:16.397051 kubelet[3165]: E1124 00:18:16.397038 3165 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-8bf8e53aa8\" not found" Nov 24 00:18:16.405744 kubelet[3165]: I1124 00:18:16.405541 3165 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:18:16.405744 kubelet[3165]: I1124 00:18:16.405642 3165 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:18:16.408886 kubelet[3165]: I1124 00:18:16.408629 3165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:18:16.410076 kubelet[3165]: I1124 00:18:16.410053 3165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:18:16.410179 kubelet[3165]: I1124 00:18:16.410081 3165 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:18:16.410179 kubelet[3165]: I1124 00:18:16.410098 3165 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:18:16.410179 kubelet[3165]: I1124 00:18:16.410104 3165 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:18:16.410179 kubelet[3165]: E1124 00:18:16.410141 3165 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:18:16.412055 kubelet[3165]: I1124 00:18:16.412031 3165 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:18:16.419225 kubelet[3165]: E1124 00:18:16.418922 3165 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:18:16.419225 kubelet[3165]: I1124 00:18:16.419073 3165 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:18:16.419225 kubelet[3165]: I1124 00:18:16.419082 3165 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:18:16.460703 kubelet[3165]: I1124 00:18:16.460647 3165 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:18:16.460777 kubelet[3165]: I1124 00:18:16.460771 3165 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:18:16.460818 kubelet[3165]: I1124 00:18:16.460813 3165 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:18:16.460954 kubelet[3165]: I1124 00:18:16.460947 3165 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:18:16.460997 kubelet[3165]: I1124 00:18:16.460986 3165 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:18:16.461023 kubelet[3165]: I1124 00:18:16.461020 3165 policy_none.go:49] "None policy: Start" Nov 24 00:18:16.461052 kubelet[3165]: I1124 00:18:16.461047 3165 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:18:16.461189 kubelet[3165]: I1124 00:18:16.461078 3165 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:18:16.461189 kubelet[3165]: I1124 00:18:16.461148 3165 state_mem.go:75] "Updated machine memory state" Nov 24 00:18:16.464079 kubelet[3165]: I1124 00:18:16.464061 3165 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:18:16.464521 kubelet[3165]: I1124 00:18:16.464381 3165 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:18:16.464521 kubelet[3165]: I1124 00:18:16.464392 3165 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:18:16.465404 kubelet[3165]: I1124 00:18:16.465109 3165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:18:16.467290 kubelet[3165]: E1124 00:18:16.467276 3165 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:18:16.511530 kubelet[3165]: I1124 00:18:16.511512 3165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.511906 kubelet[3165]: I1124 00:18:16.511896 3165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.512175 kubelet[3165]: I1124 00:18:16.511996 3165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.526900 kubelet[3165]: W1124 00:18:16.526885 3165 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 00:18:16.530426 kubelet[3165]: W1124 00:18:16.530413 3165 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 00:18:16.530717 kubelet[3165]: W1124 00:18:16.530536 3165 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 00:18:16.569180 kubelet[3165]: I1124 00:18:16.568373 3165 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.583096 kubelet[3165]: I1124 00:18:16.582683 3165 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.583096 kubelet[3165]: I1124 00:18:16.582737 3165 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.707475 kubelet[3165]: I1124 00:18:16.707447 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.707728 kubelet[3165]: I1124 00:18:16.707677 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.707728 kubelet[3165]: I1124 00:18:16.707705 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.707871 kubelet[3165]: I1124 00:18:16.707820 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f90f401a71e613c250e40e225c74c5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"9f90f401a71e613c250e40e225c74c5b\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.707871 kubelet[3165]: I1124 00:18:16.707843 3165 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.708016 kubelet[3165]: I1124 00:18:16.707960 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.708016 kubelet[3165]: I1124 00:18:16.707981 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.708129 kubelet[3165]: I1124 00:18:16.708001 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3475ac299afec2755f3686302a3fcd48-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"3475ac299afec2755f3686302a3fcd48\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:16.708129 kubelet[3165]: I1124 00:18:16.708104 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8bb24b16bc10d69ab7cf749c70e6e71-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-8bf8e53aa8\" (UID: \"e8bb24b16bc10d69ab7cf749c70e6e71\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:17.379644 kubelet[3165]: I1124 00:18:17.379610 3165 apiserver.go:52] "Watching apiserver" Nov 24 00:18:17.405875 kubelet[3165]: I1124 00:18:17.405837 3165 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:18:17.443792 kubelet[3165]: I1124 00:18:17.443694 3165 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:17.455219 kubelet[3165]: W1124 00:18:17.455187 3165 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 24 00:18:17.455307 kubelet[3165]: E1124 00:18:17.455237 3165 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-8bf8e53aa8\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:17.460487 kubelet[3165]: I1124 00:18:17.460403 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-a-8bf8e53aa8" podStartSLOduration=1.460371686 podStartE2EDuration="1.460371686s" podCreationTimestamp="2025-11-24 00:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:18:17.460141103 +0000 UTC m=+1.139918349" watchObservedRunningTime="2025-11-24 00:18:17.460371686 +0000 UTC 
m=+1.140148925" Nov 24 00:18:17.469590 kubelet[3165]: I1124 00:18:17.469465 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-8bf8e53aa8" podStartSLOduration=1.46945031 podStartE2EDuration="1.46945031s" podCreationTimestamp="2025-11-24 00:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:18:17.468990683 +0000 UTC m=+1.148767930" watchObservedRunningTime="2025-11-24 00:18:17.46945031 +0000 UTC m=+1.149227552" Nov 24 00:18:17.489837 kubelet[3165]: I1124 00:18:17.489790 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-a-8bf8e53aa8" podStartSLOduration=1.489776319 podStartE2EDuration="1.489776319s" podCreationTimestamp="2025-11-24 00:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:18:17.479667328 +0000 UTC m=+1.159444571" watchObservedRunningTime="2025-11-24 00:18:17.489776319 +0000 UTC m=+1.169553563" Nov 24 00:18:22.340154 kubelet[3165]: I1124 00:18:22.340119 3165 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:18:22.340647 containerd[1704]: time="2025-11-24T00:18:22.340584701Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:18:22.340883 kubelet[3165]: I1124 00:18:22.340831 3165 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:18:22.405159 systemd[1]: Created slice kubepods-besteffort-pod6b928c82_dbc1_4b79_ba95_55e1bf673a34.slice - libcontainer container kubepods-besteffort-pod6b928c82_dbc1_4b79_ba95_55e1bf673a34.slice. 
Nov 24 00:18:22.435541 kubelet[3165]: I1124 00:18:22.435512 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b928c82-dbc1-4b79-ba95-55e1bf673a34-xtables-lock\") pod \"kube-proxy-gqxfx\" (UID: \"6b928c82-dbc1-4b79-ba95-55e1bf673a34\") " pod="kube-system/kube-proxy-gqxfx" Nov 24 00:18:22.435541 kubelet[3165]: I1124 00:18:22.435546 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqn5\" (UniqueName: \"kubernetes.io/projected/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-api-access-lrqn5\") pod \"kube-proxy-gqxfx\" (UID: \"6b928c82-dbc1-4b79-ba95-55e1bf673a34\") " pod="kube-system/kube-proxy-gqxfx" Nov 24 00:18:22.435735 kubelet[3165]: I1124 00:18:22.435568 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-proxy\") pod \"kube-proxy-gqxfx\" (UID: \"6b928c82-dbc1-4b79-ba95-55e1bf673a34\") " pod="kube-system/kube-proxy-gqxfx" Nov 24 00:18:22.435735 kubelet[3165]: I1124 00:18:22.435584 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b928c82-dbc1-4b79-ba95-55e1bf673a34-lib-modules\") pod \"kube-proxy-gqxfx\" (UID: \"6b928c82-dbc1-4b79-ba95-55e1bf673a34\") " pod="kube-system/kube-proxy-gqxfx" Nov 24 00:18:22.540554 kubelet[3165]: E1124 00:18:22.540521 3165 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:18:22.540554 kubelet[3165]: E1124 00:18:22.540550 3165 projected.go:194] Error preparing data for projected volume kube-api-access-lrqn5 for pod kube-system/kube-proxy-gqxfx: configmap "kube-root-ca.crt" not found Nov 24 00:18:22.540731 kubelet[3165]: E1124 00:18:22.540610 3165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-api-access-lrqn5 podName:6b928c82-dbc1-4b79-ba95-55e1bf673a34 nodeName:}" failed. No retries permitted until 2025-11-24 00:18:23.040588141 +0000 UTC m=+6.720365373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lrqn5" (UniqueName: "kubernetes.io/projected/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-api-access-lrqn5") pod "kube-proxy-gqxfx" (UID: "6b928c82-dbc1-4b79-ba95-55e1bf673a34") : configmap "kube-root-ca.crt" not found Nov 24 00:18:23.140668 kubelet[3165]: E1124 00:18:23.140628 3165 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:18:23.140668 kubelet[3165]: E1124 00:18:23.140661 3165 projected.go:194] Error preparing data for projected volume kube-api-access-lrqn5 for pod kube-system/kube-proxy-gqxfx: configmap "kube-root-ca.crt" not found Nov 24 00:18:23.140840 kubelet[3165]: E1124 00:18:23.140710 3165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-api-access-lrqn5 podName:6b928c82-dbc1-4b79-ba95-55e1bf673a34 nodeName:}" failed. No retries permitted until 2025-11-24 00:18:24.140692166 +0000 UTC m=+7.820469412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lrqn5" (UniqueName: "kubernetes.io/projected/6b928c82-dbc1-4b79-ba95-55e1bf673a34-kube-api-access-lrqn5") pod "kube-proxy-gqxfx" (UID: "6b928c82-dbc1-4b79-ba95-55e1bf673a34") : configmap "kube-root-ca.crt" not found Nov 24 00:18:23.441518 systemd[1]: Created slice kubepods-besteffort-pod73210790_5926_447d_98a0_d56bfe2d37d6.slice - libcontainer container kubepods-besteffort-pod73210790_5926_447d_98a0_d56bfe2d37d6.slice. Nov 24 00:18:23.444962 kubelet[3165]: I1124 00:18:23.444854 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hts\" (UniqueName: \"kubernetes.io/projected/73210790-5926-447d-98a0-d56bfe2d37d6-kube-api-access-z9hts\") pod \"tigera-operator-7dcd859c48-tndq7\" (UID: \"73210790-5926-447d-98a0-d56bfe2d37d6\") " pod="tigera-operator/tigera-operator-7dcd859c48-tndq7" Nov 24 00:18:23.446356 kubelet[3165]: I1124 00:18:23.446247 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73210790-5926-447d-98a0-d56bfe2d37d6-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tndq7\" (UID: \"73210790-5926-447d-98a0-d56bfe2d37d6\") " pod="tigera-operator/tigera-operator-7dcd859c48-tndq7" Nov 24 00:18:23.748258 containerd[1704]: time="2025-11-24T00:18:23.748141514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tndq7,Uid:73210790-5926-447d-98a0-d56bfe2d37d6,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:18:23.869583 containerd[1704]: time="2025-11-24T00:18:23.869542223Z" level=info msg="connecting to shim 5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d" address="unix:///run/containerd/s/d50bfa079188202a042242ea3dd4955a7fe8df8a731d38e0c1e02afb9d6899d9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:23.892498 systemd[1]: Started cri-containerd-5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d.scope - libcontainer container 5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d. Nov 24 00:18:23.934115 containerd[1704]: time="2025-11-24T00:18:23.934078384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tndq7,Uid:73210790-5926-447d-98a0-d56bfe2d37d6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d\"" Nov 24 00:18:23.936211 containerd[1704]: time="2025-11-24T00:18:23.936183766Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:18:24.212588 containerd[1704]: time="2025-11-24T00:18:24.212550936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqxfx,Uid:6b928c82-dbc1-4b79-ba95-55e1bf673a34,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:24.271495 containerd[1704]: time="2025-11-24T00:18:24.271404397Z" level=info msg="connecting to shim 3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e" address="unix:///run/containerd/s/d33c212896c9fba799fa9b77ff3fdaebd908a586b06ab6dcc557cea6692d696e" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:24.290420 systemd[1]: Started cri-containerd-3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e.scope - libcontainer container 3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e. 
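The MountVolume.SetUp failures above are retried with a doubling delay (durationBeforeRetry 500ms after the first failure, 1s after the second) until the missing "kube-root-ca.crt" ConfigMap becomes available. The sketch below illustrates only that backoff pattern; the function, its parameters, and the cap are assumptions for illustration, not the kubelet's nestedpendingoperations implementation:

```python
import time

def retry_with_backoff(operation, initial_delay=0.5, max_delay=2 * 60 * 60):
    """Retry operation, roughly doubling the wait after each failure, up to a cap."""
    delay = initial_delay
    while True:
        try:
            return operation()
        except Exception as err:
            print(f"operation failed ({err}); no retries permitted for {delay}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)

# Usage example: the mount succeeds once the missing ConfigMap "appears"
# (simulated here with a simple attempt counter).
attempts = {"n": 0}

def mount_volume():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError('configmap "kube-root-ca.crt" not found')
    return "mounted"

print(retry_with_backoff(mount_volume))
```

In the log the second attempt at 00:18:23.14 still fails and the third, a second later, succeeds once the kube-proxy pod's projected token volume can be built, matching the point in the sketch where the simulated ConfigMap becomes available.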
Nov 24 00:18:24.313732 containerd[1704]: time="2025-11-24T00:18:24.313703286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqxfx,Uid:6b928c82-dbc1-4b79-ba95-55e1bf673a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e\"" Nov 24 00:18:24.316818 containerd[1704]: time="2025-11-24T00:18:24.316787407Z" level=info msg="CreateContainer within sandbox \"3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:18:24.335924 containerd[1704]: time="2025-11-24T00:18:24.335896595Z" level=info msg="Container 83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:24.353259 containerd[1704]: time="2025-11-24T00:18:24.353234881Z" level=info msg="CreateContainer within sandbox \"3bdd9de50f3d1045dab2e24976a40a51b9ed39723465e388f31928396a7e796e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869\"" Nov 24 00:18:24.353923 containerd[1704]: time="2025-11-24T00:18:24.353850446Z" level=info msg="StartContainer for \"83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869\"" Nov 24 00:18:24.355794 containerd[1704]: time="2025-11-24T00:18:24.355675386Z" level=info msg="connecting to shim 83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869" address="unix:///run/containerd/s/d33c212896c9fba799fa9b77ff3fdaebd908a586b06ab6dcc557cea6692d696e" protocol=ttrpc version=3 Nov 24 00:18:24.376330 systemd[1]: Started cri-containerd-83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869.scope - libcontainer container 83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869. Nov 24 00:18:24.430978 containerd[1704]: time="2025-11-24T00:18:24.430947325Z" level=info msg="StartContainer for \"83fd98665e00779a7f0d392f22d48adb2cc01c059e375611173a5fdfbb86b869\" returns successfully" Nov 24 00:18:24.470552 kubelet[3165]: I1124 00:18:24.470433 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gqxfx" podStartSLOduration=2.470416996 podStartE2EDuration="2.470416996s" podCreationTimestamp="2025-11-24 00:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:18:24.470395267 +0000 UTC m=+8.150172534" watchObservedRunningTime="2025-11-24 00:18:24.470416996 +0000 UTC m=+8.150194252" Nov 24 00:18:25.270888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123450241.mount: Deactivated successfully. 
Nov 24 00:18:25.905068 containerd[1704]: time="2025-11-24T00:18:25.905024923Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:25.907900 containerd[1704]: time="2025-11-24T00:18:25.907872877Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:18:25.910797 containerd[1704]: time="2025-11-24T00:18:25.910757568Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:25.914628 containerd[1704]: time="2025-11-24T00:18:25.914588618Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:25.915067 containerd[1704]: time="2025-11-24T00:18:25.914942391Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.978727092s" Nov 24 00:18:25.915067 containerd[1704]: time="2025-11-24T00:18:25.914966703Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:18:25.917331 containerd[1704]: time="2025-11-24T00:18:25.917303639Z" level=info msg="CreateContainer within sandbox \"5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:18:25.938112 containerd[1704]: time="2025-11-24T00:18:25.937549918Z" level=info msg="Container 9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:25.953102 containerd[1704]: time="2025-11-24T00:18:25.953075849Z" level=info msg="CreateContainer within sandbox \"5df9662739e1cc7578c31df88a94721d8051e721c8685c4385f95e3ea059f70d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33\"" Nov 24 00:18:25.953580 containerd[1704]: time="2025-11-24T00:18:25.953511409Z" level=info msg="StartContainer for \"9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33\"" Nov 24 00:18:25.954557 containerd[1704]: time="2025-11-24T00:18:25.954534173Z" level=info msg="connecting to shim 9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33" address="unix:///run/containerd/s/d50bfa079188202a042242ea3dd4955a7fe8df8a731d38e0c1e02afb9d6899d9" protocol=ttrpc version=3 Nov 24 00:18:25.975449 systemd[1]: Started cri-containerd-9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33.scope - libcontainer container 9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33. 
Nov 24 00:18:26.001033 containerd[1704]: time="2025-11-24T00:18:26.001005275Z" level=info msg="StartContainer for \"9a139822ae8ad8d02a9dde267b525a629ffe0e8805baaa56b6c2f52da2eb1b33\" returns successfully" Nov 24 00:18:26.498183 kubelet[3165]: I1124 00:18:26.498062 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tndq7" podStartSLOduration=1.517504953 podStartE2EDuration="3.497881911s" podCreationTimestamp="2025-11-24 00:18:23 +0000 UTC" firstStartedPulling="2025-11-24 00:18:23.9352477 +0000 UTC m=+7.615024935" lastFinishedPulling="2025-11-24 00:18:25.915624664 +0000 UTC m=+9.595401893" observedRunningTime="2025-11-24 00:18:26.479388506 +0000 UTC m=+10.159165751" watchObservedRunningTime="2025-11-24 00:18:26.497881911 +0000 UTC m=+10.177659154" Nov 24 00:18:31.974625 sudo[2163]: pam_unix(sudo:session): session closed for user root Nov 24 00:18:32.087552 sshd[2162]: Connection closed by 10.200.16.10 port 54746 Nov 24 00:18:32.088435 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Nov 24 00:18:32.094766 systemd[1]: sshd@6-10.200.4.12:22-10.200.16.10:54746.service: Deactivated successfully. Nov 24 00:18:32.097952 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:18:32.098578 systemd[1]: session-9.scope: Consumed 3.038s CPU time, 224.1M memory peak. Nov 24 00:18:32.100251 systemd-logind[1680]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:18:32.103996 systemd-logind[1680]: Removed session 9. Nov 24 00:18:38.206665 systemd[1]: Created slice kubepods-besteffort-pod9d60b69d_c951_4008_be82_183473d9aa22.slice - libcontainer container kubepods-besteffort-pod9d60b69d_c951_4008_be82_183473d9aa22.slice. Nov 24 00:18:38.236063 kubelet[3165]: I1124 00:18:38.236030 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d60b69d-c951-4008-be82-183473d9aa22-tigera-ca-bundle\") pod \"calico-typha-85db75764c-t8v2l\" (UID: \"9d60b69d-c951-4008-be82-183473d9aa22\") " pod="calico-system/calico-typha-85db75764c-t8v2l" Nov 24 00:18:38.236385 kubelet[3165]: I1124 00:18:38.236069 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d60b69d-c951-4008-be82-183473d9aa22-typha-certs\") pod \"calico-typha-85db75764c-t8v2l\" (UID: \"9d60b69d-c951-4008-be82-183473d9aa22\") " pod="calico-system/calico-typha-85db75764c-t8v2l" Nov 24 00:18:38.236385 kubelet[3165]: I1124 00:18:38.236089 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgmr\" (UniqueName: \"kubernetes.io/projected/9d60b69d-c951-4008-be82-183473d9aa22-kube-api-access-sqgmr\") pod \"calico-typha-85db75764c-t8v2l\" (UID: \"9d60b69d-c951-4008-be82-183473d9aa22\") " pod="calico-system/calico-typha-85db75764c-t8v2l" Nov 24 00:18:38.381813 systemd[1]: Created slice kubepods-besteffort-pod47a78985_e956_4491_a9df_ab2bd5e3ebed.slice - libcontainer container kubepods-besteffort-pod47a78985_e956_4491_a9df_ab2bd5e3ebed.slice. 
Nov 24 00:18:38.437968 kubelet[3165]: I1124 00:18:38.437938 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-cni-log-dir\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.437968 kubelet[3165]: I1124 00:18:38.437969 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-lib-modules\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438236 kubelet[3165]: I1124 00:18:38.437985 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-var-lib-calico\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438236 kubelet[3165]: I1124 00:18:38.438005 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-cni-bin-dir\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438236 kubelet[3165]: I1124 00:18:38.438022 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-policysync\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438236 kubelet[3165]: I1124 00:18:38.438039 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m9xw\" (UniqueName: \"kubernetes.io/projected/47a78985-e956-4491-a9df-ab2bd5e3ebed-kube-api-access-6m9xw\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438236 kubelet[3165]: I1124 00:18:38.438057 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47a78985-e956-4491-a9df-ab2bd5e3ebed-node-certs\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438355 kubelet[3165]: I1124 00:18:38.438075 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47a78985-e956-4491-a9df-ab2bd5e3ebed-tigera-ca-bundle\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438355 kubelet[3165]: I1124 00:18:38.438107 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-cni-net-dir\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438355 kubelet[3165]: I1124 00:18:38.438132 3165 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-flexvol-driver-host\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438355 kubelet[3165]: I1124 00:18:38.438203 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-var-run-calico\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.438355 kubelet[3165]: I1124 00:18:38.438254 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a78985-e956-4491-a9df-ab2bd5e3ebed-xtables-lock\") pod \"calico-node-xltbc\" (UID: \"47a78985-e956-4491-a9df-ab2bd5e3ebed\") " pod="calico-system/calico-node-xltbc" Nov 24 00:18:38.514695 containerd[1704]: time="2025-11-24T00:18:38.514605855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85db75764c-t8v2l,Uid:9d60b69d-c951-4008-be82-183473d9aa22,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:38.542340 kubelet[3165]: E1124 00:18:38.542194 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.542340 kubelet[3165]: W1124 00:18:38.542214 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.542340 kubelet[3165]: E1124 00:18:38.542240 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.547595 kubelet[3165]: E1124 00:18:38.547575 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.547595 kubelet[3165]: W1124 00:18:38.547592 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.547713 kubelet[3165]: E1124 00:18:38.547606 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.565075 kubelet[3165]: E1124 00:18:38.565056 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.565696 kubelet[3165]: W1124 00:18:38.565609 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.565696 kubelet[3165]: E1124 00:18:38.565635 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.570059 containerd[1704]: time="2025-11-24T00:18:38.569986565Z" level=info msg="connecting to shim c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d" address="unix:///run/containerd/s/a321435182f49dfce3d504c96f59ba00e9fd044dc5363c9a989047f147c74e5d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:38.591439 kubelet[3165]: E1124 00:18:38.591383 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:38.598326 systemd[1]: Started cri-containerd-c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d.scope - libcontainer container c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d. Nov 24 00:18:38.626430 kubelet[3165]: E1124 00:18:38.626399 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.626628 kubelet[3165]: W1124 00:18:38.626416 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.626628 kubelet[3165]: E1124 00:18:38.626575 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.627303 kubelet[3165]: E1124 00:18:38.627290 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.627443 kubelet[3165]: W1124 00:18:38.627358 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.627443 kubelet[3165]: E1124 00:18:38.627376 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.627770 kubelet[3165]: E1124 00:18:38.627723 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.627770 kubelet[3165]: W1124 00:18:38.627736 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.627770 kubelet[3165]: E1124 00:18:38.627750 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.628481 kubelet[3165]: E1124 00:18:38.628404 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.628481 kubelet[3165]: W1124 00:18:38.628417 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.628481 kubelet[3165]: E1124 00:18:38.628431 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.630190 kubelet[3165]: E1124 00:18:38.628806 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630190 kubelet[3165]: W1124 00:18:38.628819 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630190 kubelet[3165]: E1124 00:18:38.628832 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.630388 kubelet[3165]: E1124 00:18:38.630375 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630417 kubelet[3165]: W1124 00:18:38.630391 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630417 kubelet[3165]: E1124 00:18:38.630412 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.630548 kubelet[3165]: E1124 00:18:38.630538 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630583 kubelet[3165]: W1124 00:18:38.630562 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630583 kubelet[3165]: E1124 00:18:38.630572 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.630698 kubelet[3165]: E1124 00:18:38.630688 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630740 kubelet[3165]: W1124 00:18:38.630698 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630740 kubelet[3165]: E1124 00:18:38.630706 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.630843 kubelet[3165]: E1124 00:18:38.630835 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630883 kubelet[3165]: W1124 00:18:38.630846 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630883 kubelet[3165]: E1124 00:18:38.630855 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.630969 kubelet[3165]: E1124 00:18:38.630962 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.630997 kubelet[3165]: W1124 00:18:38.630969 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.630997 kubelet[3165]: E1124 00:18:38.630976 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631133 kubelet[3165]: E1124 00:18:38.631125 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631189 kubelet[3165]: W1124 00:18:38.631134 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631189 kubelet[3165]: E1124 00:18:38.631141 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631293 kubelet[3165]: E1124 00:18:38.631284 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631321 kubelet[3165]: W1124 00:18:38.631292 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631321 kubelet[3165]: E1124 00:18:38.631299 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631405 kubelet[3165]: E1124 00:18:38.631397 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631438 kubelet[3165]: W1124 00:18:38.631405 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631438 kubelet[3165]: E1124 00:18:38.631412 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.631505 kubelet[3165]: E1124 00:18:38.631498 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631527 kubelet[3165]: W1124 00:18:38.631506 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631527 kubelet[3165]: E1124 00:18:38.631512 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631628 kubelet[3165]: E1124 00:18:38.631600 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631628 kubelet[3165]: W1124 00:18:38.631605 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631628 kubelet[3165]: E1124 00:18:38.631612 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631717 kubelet[3165]: E1124 00:18:38.631709 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631742 kubelet[3165]: W1124 00:18:38.631717 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631742 kubelet[3165]: E1124 00:18:38.631723 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631827 kubelet[3165]: E1124 00:18:38.631819 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631868 kubelet[3165]: W1124 00:18:38.631827 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631868 kubelet[3165]: E1124 00:18:38.631833 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.631925 kubelet[3165]: E1124 00:18:38.631919 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.631962 kubelet[3165]: W1124 00:18:38.631927 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.631962 kubelet[3165]: E1124 00:18:38.631933 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.632036 kubelet[3165]: E1124 00:18:38.632020 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.632036 kubelet[3165]: W1124 00:18:38.632027 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.632036 kubelet[3165]: E1124 00:18:38.632033 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.632125 kubelet[3165]: E1124 00:18:38.632118 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.632147 kubelet[3165]: W1124 00:18:38.632127 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.632147 kubelet[3165]: E1124 00:18:38.632133 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.640566 kubelet[3165]: E1124 00:18:38.640517 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.640566 kubelet[3165]: W1124 00:18:38.640530 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.640566 kubelet[3165]: E1124 00:18:38.640543 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.640690 kubelet[3165]: I1124 00:18:38.640591 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/377ffa75-e56f-4a86-9355-a323312d6a89-varrun\") pod \"csi-node-driver-z6pwc\" (UID: \"377ffa75-e56f-4a86-9355-a323312d6a89\") " pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:38.641046 kubelet[3165]: E1124 00:18:38.640719 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.641046 kubelet[3165]: W1124 00:18:38.640779 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.641046 kubelet[3165]: E1124 00:18:38.640790 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.641046 kubelet[3165]: I1124 00:18:38.640807 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/377ffa75-e56f-4a86-9355-a323312d6a89-kubelet-dir\") pod \"csi-node-driver-z6pwc\" (UID: \"377ffa75-e56f-4a86-9355-a323312d6a89\") " pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:38.641222 kubelet[3165]: E1124 00:18:38.641194 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.641222 kubelet[3165]: W1124 00:18:38.641206 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.641376 kubelet[3165]: E1124 00:18:38.641294 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.641592 kubelet[3165]: E1124 00:18:38.641442 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.641655 kubelet[3165]: W1124 00:18:38.641627 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.641655 kubelet[3165]: E1124 00:18:38.641653 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.641817 kubelet[3165]: E1124 00:18:38.641810 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.641849 kubelet[3165]: W1124 00:18:38.641818 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.642068 kubelet[3165]: E1124 00:18:38.642054 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.642099 kubelet[3165]: I1124 00:18:38.642084 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blml4\" (UniqueName: \"kubernetes.io/projected/377ffa75-e56f-4a86-9355-a323312d6a89-kube-api-access-blml4\") pod \"csi-node-driver-z6pwc\" (UID: \"377ffa75-e56f-4a86-9355-a323312d6a89\") " pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:38.642416 kubelet[3165]: E1124 00:18:38.642383 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.642416 kubelet[3165]: W1124 00:18:38.642402 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.642507 kubelet[3165]: E1124 00:18:38.642421 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.644563 kubelet[3165]: E1124 00:18:38.644544 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.644563 kubelet[3165]: W1124 00:18:38.644562 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.644744 kubelet[3165]: E1124 00:18:38.644580 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.644842 kubelet[3165]: E1124 00:18:38.644831 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.644890 kubelet[3165]: W1124 00:18:38.644843 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.644890 kubelet[3165]: E1124 00:18:38.644854 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.644890 kubelet[3165]: I1124 00:18:38.644875 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/377ffa75-e56f-4a86-9355-a323312d6a89-socket-dir\") pod \"csi-node-driver-z6pwc\" (UID: \"377ffa75-e56f-4a86-9355-a323312d6a89\") " pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:38.645275 kubelet[3165]: E1124 00:18:38.645263 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.645275 kubelet[3165]: W1124 00:18:38.645274 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.645443 kubelet[3165]: E1124 00:18:38.645285 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.645443 kubelet[3165]: I1124 00:18:38.645303 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/377ffa75-e56f-4a86-9355-a323312d6a89-registration-dir\") pod \"csi-node-driver-z6pwc\" (UID: \"377ffa75-e56f-4a86-9355-a323312d6a89\") " pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:38.645576 kubelet[3165]: E1124 00:18:38.645563 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.645609 kubelet[3165]: W1124 00:18:38.645576 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.646282 kubelet[3165]: E1124 00:18:38.646262 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.646384 kubelet[3165]: E1124 00:18:38.646374 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.646421 kubelet[3165]: W1124 00:18:38.646386 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.646541 kubelet[3165]: E1124 00:18:38.646465 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.646541 kubelet[3165]: E1124 00:18:38.646520 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.646541 kubelet[3165]: W1124 00:18:38.646525 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.646618 kubelet[3165]: E1124 00:18:38.646598 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.646728 kubelet[3165]: E1124 00:18:38.646650 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.646728 kubelet[3165]: W1124 00:18:38.646657 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.646728 kubelet[3165]: E1124 00:18:38.646663 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.646822 kubelet[3165]: E1124 00:18:38.646787 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.646822 kubelet[3165]: W1124 00:18:38.646793 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.646822 kubelet[3165]: E1124 00:18:38.646799 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.646981 kubelet[3165]: E1124 00:18:38.646963 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.647023 kubelet[3165]: W1124 00:18:38.646975 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.647023 kubelet[3165]: E1124 00:18:38.647019 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.654864 containerd[1704]: time="2025-11-24T00:18:38.654831944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85db75764c-t8v2l,Uid:9d60b69d-c951-4008-be82-183473d9aa22,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d\"" Nov 24 00:18:38.656886 containerd[1704]: time="2025-11-24T00:18:38.656434876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:18:38.685660 containerd[1704]: time="2025-11-24T00:18:38.685630110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xltbc,Uid:47a78985-e956-4491-a9df-ab2bd5e3ebed,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:38.736534 containerd[1704]: time="2025-11-24T00:18:38.736419149Z" level=info msg="connecting to shim 9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc" address="unix:///run/containerd/s/0165d5902903db9b4f8030b6d5331272f2797acaabc9b3bca82a162113b3bc1b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:38.745952 kubelet[3165]: E1124 00:18:38.745932 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.745952 kubelet[3165]: W1124 00:18:38.745952 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.746059 kubelet[3165]: E1124 00:18:38.745969 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.746327 kubelet[3165]: E1124 00:18:38.746314 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.746367 kubelet[3165]: W1124 00:18:38.746327 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.746367 kubelet[3165]: E1124 00:18:38.746346 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.746556 kubelet[3165]: E1124 00:18:38.746514 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.746556 kubelet[3165]: W1124 00:18:38.746540 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.746556 kubelet[3165]: E1124 00:18:38.746552 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.746688 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747268 kubelet[3165]: W1124 00:18:38.746733 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.746759 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.746883 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747268 kubelet[3165]: W1124 00:18:38.746889 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.746896 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.747030 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747268 kubelet[3165]: W1124 00:18:38.747036 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.747045 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.747268 kubelet[3165]: E1124 00:18:38.747190 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747506 kubelet[3165]: W1124 00:18:38.747196 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747506 kubelet[3165]: E1124 00:18:38.747203 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.747506 kubelet[3165]: E1124 00:18:38.747318 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747506 kubelet[3165]: W1124 00:18:38.747324 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747506 kubelet[3165]: E1124 00:18:38.747330 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.747622 kubelet[3165]: E1124 00:18:38.747559 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747622 kubelet[3165]: W1124 00:18:38.747565 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747622 kubelet[3165]: E1124 00:18:38.747571 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.747692 kubelet[3165]: E1124 00:18:38.747678 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.747692 kubelet[3165]: W1124 00:18:38.747683 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.747739 kubelet[3165]: E1124 00:18:38.747700 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.747789 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.748330 kubelet[3165]: W1124 00:18:38.747797 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.747803 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.747893 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.748330 kubelet[3165]: W1124 00:18:38.747900 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.747908 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.748061 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.748330 kubelet[3165]: W1124 00:18:38.748068 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.748083 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.748330 kubelet[3165]: E1124 00:18:38.748266 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749153 kubelet[3165]: W1124 00:18:38.748287 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748297 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748400 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749153 kubelet[3165]: W1124 00:18:38.748406 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748413 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748523 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749153 kubelet[3165]: W1124 00:18:38.748529 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748622 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749153 kubelet[3165]: W1124 00:18:38.748627 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749153 kubelet[3165]: E1124 00:18:38.748710 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749391 kubelet[3165]: W1124 00:18:38.748715 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748723 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748619 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748800 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748829 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749391 kubelet[3165]: W1124 00:18:38.748834 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748848 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748962 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.749391 kubelet[3165]: W1124 00:18:38.748968 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.749391 kubelet[3165]: E1124 00:18:38.748981 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749126 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.751838 kubelet[3165]: W1124 00:18:38.749131 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749151 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749508 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.751838 kubelet[3165]: W1124 00:18:38.749515 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749524 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749640 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.751838 kubelet[3165]: W1124 00:18:38.749645 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749651 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:38.751838 kubelet[3165]: E1124 00:18:38.749748 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.752257 kubelet[3165]: W1124 00:18:38.749753 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.752257 kubelet[3165]: E1124 00:18:38.749760 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.752257 kubelet[3165]: E1124 00:18:38.750138 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.752257 kubelet[3165]: W1124 00:18:38.750150 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.752257 kubelet[3165]: E1124 00:18:38.750194 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.759884 kubelet[3165]: E1124 00:18:38.759845 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:38.759884 kubelet[3165]: W1124 00:18:38.759860 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:38.759884 kubelet[3165]: E1124 00:18:38.759872 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:38.767325 systemd[1]: Started cri-containerd-9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc.scope - libcontainer container 9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc. Nov 24 00:18:38.796094 containerd[1704]: time="2025-11-24T00:18:38.796074352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xltbc,Uid:47a78985-e956-4491-a9df-ab2bd5e3ebed,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\"" Nov 24 00:18:39.792593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567443437.mount: Deactivated successfully. 
Nov 24 00:18:40.429694 kubelet[3165]: E1124 00:18:40.428715 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:40.601953 containerd[1704]: time="2025-11-24T00:18:40.601911054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:40.604679 containerd[1704]: time="2025-11-24T00:18:40.604648013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:18:40.607667 containerd[1704]: time="2025-11-24T00:18:40.607625755Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:40.611447 containerd[1704]: time="2025-11-24T00:18:40.611400487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:40.611971 containerd[1704]: time="2025-11-24T00:18:40.611727626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.955262319s" Nov 24 00:18:40.611971 containerd[1704]: time="2025-11-24T00:18:40.611755282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:18:40.613185 containerd[1704]: time="2025-11-24T00:18:40.613137695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:18:40.625648 containerd[1704]: time="2025-11-24T00:18:40.625500188Z" level=info msg="CreateContainer within sandbox \"c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:18:40.646792 containerd[1704]: time="2025-11-24T00:18:40.645893387Z" level=info msg="Container 845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:40.662201 containerd[1704]: time="2025-11-24T00:18:40.662156297Z" level=info msg="CreateContainer within sandbox \"c8ba766a4ba0708472772ea2adf80cdc4d7162ec3d9856f73ec10890326f1e0d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af\"" Nov 24 00:18:40.662798 containerd[1704]: time="2025-11-24T00:18:40.662779554Z" level=info msg="StartContainer for \"845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af\"" Nov 24 00:18:40.663928 containerd[1704]: time="2025-11-24T00:18:40.663904361Z" level=info msg="connecting to shim 845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af" address="unix:///run/containerd/s/a321435182f49dfce3d504c96f59ba00e9fd044dc5363c9a989047f147c74e5d" protocol=ttrpc version=3 Nov 24 00:18:40.688309 systemd[1]: Started 
cri-containerd-845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af.scope - libcontainer container 845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af. Nov 24 00:18:40.735239 containerd[1704]: time="2025-11-24T00:18:40.735211499Z" level=info msg="StartContainer for \"845bad5ceb1ca51b7b5f08fd8ebdb7d6f4a4f102f4f724e26fc780d0583317af\" returns successfully" Nov 24 00:18:41.504470 kubelet[3165]: I1124 00:18:41.503669 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85db75764c-t8v2l" podStartSLOduration=1.546974436 podStartE2EDuration="3.503652727s" podCreationTimestamp="2025-11-24 00:18:38 +0000 UTC" firstStartedPulling="2025-11-24 00:18:38.655896637 +0000 UTC m=+22.335673871" lastFinishedPulling="2025-11-24 00:18:40.612574925 +0000 UTC m=+24.292352162" observedRunningTime="2025-11-24 00:18:41.503583408 +0000 UTC m=+25.183360646" watchObservedRunningTime="2025-11-24 00:18:41.503652727 +0000 UTC m=+25.183429969" Nov 24 00:18:41.550963 kubelet[3165]: E1124 00:18:41.550934 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.550963 kubelet[3165]: W1124 00:18:41.550953 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551113 kubelet[3165]: E1124 00:18:41.550971 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551113 kubelet[3165]: E1124 00:18:41.551093 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551113 kubelet[3165]: W1124 00:18:41.551100 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551113 kubelet[3165]: E1124 00:18:41.551108 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551233 kubelet[3165]: E1124 00:18:41.551214 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551233 kubelet[3165]: W1124 00:18:41.551219 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551233 kubelet[3165]: E1124 00:18:41.551226 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.551393 kubelet[3165]: E1124 00:18:41.551370 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551393 kubelet[3165]: W1124 00:18:41.551390 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551456 kubelet[3165]: E1124 00:18:41.551400 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551534 kubelet[3165]: E1124 00:18:41.551521 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551534 kubelet[3165]: W1124 00:18:41.551530 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551594 kubelet[3165]: E1124 00:18:41.551539 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551644 kubelet[3165]: E1124 00:18:41.551633 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551644 kubelet[3165]: W1124 00:18:41.551641 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551710 kubelet[3165]: E1124 00:18:41.551648 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551743 kubelet[3165]: E1124 00:18:41.551734 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551743 kubelet[3165]: W1124 00:18:41.551739 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551801 kubelet[3165]: E1124 00:18:41.551746 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.551841 kubelet[3165]: E1124 00:18:41.551832 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551841 kubelet[3165]: W1124 00:18:41.551840 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.551904 kubelet[3165]: E1124 00:18:41.551846 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.551952 kubelet[3165]: E1124 00:18:41.551941 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.551952 kubelet[3165]: W1124 00:18:41.551948 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552011 kubelet[3165]: E1124 00:18:41.551956 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.552042 kubelet[3165]: E1124 00:18:41.552039 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552069 kubelet[3165]: W1124 00:18:41.552044 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552069 kubelet[3165]: E1124 00:18:41.552050 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.552136 kubelet[3165]: E1124 00:18:41.552131 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552187 kubelet[3165]: W1124 00:18:41.552136 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552187 kubelet[3165]: E1124 00:18:41.552142 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.552247 kubelet[3165]: E1124 00:18:41.552241 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552275 kubelet[3165]: W1124 00:18:41.552247 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552275 kubelet[3165]: E1124 00:18:41.552253 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.552342 kubelet[3165]: E1124 00:18:41.552338 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552378 kubelet[3165]: W1124 00:18:41.552343 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552378 kubelet[3165]: E1124 00:18:41.552349 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.552438 kubelet[3165]: E1124 00:18:41.552429 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552438 kubelet[3165]: W1124 00:18:41.552434 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552498 kubelet[3165]: E1124 00:18:41.552440 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.552528 kubelet[3165]: E1124 00:18:41.552523 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.552552 kubelet[3165]: W1124 00:18:41.552528 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.552552 kubelet[3165]: E1124 00:18:41.552534 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.563797 kubelet[3165]: E1124 00:18:41.563775 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.563797 kubelet[3165]: W1124 00:18:41.563793 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.563908 kubelet[3165]: E1124 00:18:41.563806 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564018 kubelet[3165]: E1124 00:18:41.563929 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564018 kubelet[3165]: W1124 00:18:41.563935 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564018 kubelet[3165]: E1124 00:18:41.563943 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564106 kubelet[3165]: E1124 00:18:41.564091 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564106 kubelet[3165]: W1124 00:18:41.564103 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564184 kubelet[3165]: E1124 00:18:41.564119 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.564262 kubelet[3165]: E1124 00:18:41.564241 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564262 kubelet[3165]: W1124 00:18:41.564258 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564321 kubelet[3165]: E1124 00:18:41.564274 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564394 kubelet[3165]: E1124 00:18:41.564385 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564394 kubelet[3165]: W1124 00:18:41.564392 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564447 kubelet[3165]: E1124 00:18:41.564401 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564549 kubelet[3165]: E1124 00:18:41.564536 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564549 kubelet[3165]: W1124 00:18:41.564546 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564598 kubelet[3165]: E1124 00:18:41.564564 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564799 kubelet[3165]: E1124 00:18:41.564787 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564799 kubelet[3165]: W1124 00:18:41.564796 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.564852 kubelet[3165]: E1124 00:18:41.564807 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.564989 kubelet[3165]: E1124 00:18:41.564968 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.564989 kubelet[3165]: W1124 00:18:41.564989 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565039 kubelet[3165]: E1124 00:18:41.565005 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.565136 kubelet[3165]: E1124 00:18:41.565121 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565136 kubelet[3165]: W1124 00:18:41.565133 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565258 kubelet[3165]: E1124 00:18:41.565194 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.565320 kubelet[3165]: E1124 00:18:41.565261 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565320 kubelet[3165]: W1124 00:18:41.565267 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565320 kubelet[3165]: E1124 00:18:41.565282 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.565407 kubelet[3165]: E1124 00:18:41.565357 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565407 kubelet[3165]: W1124 00:18:41.565362 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565407 kubelet[3165]: E1124 00:18:41.565374 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.565507 kubelet[3165]: E1124 00:18:41.565474 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565507 kubelet[3165]: W1124 00:18:41.565478 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565507 kubelet[3165]: E1124 00:18:41.565492 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.565639 kubelet[3165]: E1124 00:18:41.565623 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565639 kubelet[3165]: W1124 00:18:41.565636 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565688 kubelet[3165]: E1124 00:18:41.565646 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.565905 kubelet[3165]: E1124 00:18:41.565897 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.565905 kubelet[3165]: W1124 00:18:41.565904 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.565961 kubelet[3165]: E1124 00:18:41.565914 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.566029 kubelet[3165]: E1124 00:18:41.566019 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.566029 kubelet[3165]: W1124 00:18:41.566027 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.566080 kubelet[3165]: E1124 00:18:41.566033 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.566138 kubelet[3165]: E1124 00:18:41.566113 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.566138 kubelet[3165]: W1124 00:18:41.566133 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.566232 kubelet[3165]: E1124 00:18:41.566140 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.566326 kubelet[3165]: E1124 00:18:41.566313 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.566326 kubelet[3165]: W1124 00:18:41.566324 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.566404 kubelet[3165]: E1124 00:18:41.566332 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:18:41.566738 kubelet[3165]: E1124 00:18:41.566717 3165 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:18:41.566738 kubelet[3165]: W1124 00:18:41.566736 3165 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:18:41.566794 kubelet[3165]: E1124 00:18:41.566748 3165 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:18:41.728880 containerd[1704]: time="2025-11-24T00:18:41.728841257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:41.731658 containerd[1704]: time="2025-11-24T00:18:41.731605097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:18:41.734720 containerd[1704]: time="2025-11-24T00:18:41.734677689Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:41.741178 containerd[1704]: time="2025-11-24T00:18:41.740995121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:41.741453 containerd[1704]: time="2025-11-24T00:18:41.741433922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.128246221s" Nov 24 00:18:41.741518 containerd[1704]: time="2025-11-24T00:18:41.741506478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:18:41.743691 containerd[1704]: time="2025-11-24T00:18:41.743654801Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:18:41.785235 containerd[1704]: time="2025-11-24T00:18:41.783600481Z" level=info msg="Container 5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:41.802761 containerd[1704]: time="2025-11-24T00:18:41.802733865Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb\"" Nov 24 00:18:41.803780 containerd[1704]: time="2025-11-24T00:18:41.803729293Z" level=info msg="StartContainer for \"5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb\"" Nov 24 00:18:41.805122 containerd[1704]: time="2025-11-24T00:18:41.805094135Z" level=info msg="connecting to shim 5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb" address="unix:///run/containerd/s/0165d5902903db9b4f8030b6d5331272f2797acaabc9b3bca82a162113b3bc1b" protocol=ttrpc version=3 Nov 24 00:18:41.826548 systemd[1]: Started cri-containerd-5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb.scope - libcontainer container 5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb. 
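Annotation: the burst of "Failed to unmarshal output for command: init" and "executable file not found in $PATH" messages above is kubelet probing the FlexVolume plugin directory before the flexvol-driver container started just above (Calico's pod2daemon-flexvol) has installed the `uds` binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. Kubelet executes the driver with the `init` argument and expects a JSON status object on stdout; with no executable there is no output, so decoding fails with "unexpected end of JSON input". The stub below is a minimal, illustrative sketch of that handshake (the shape of the reply kubelet's driver-call.go parses), not Calico's actual pod2daemon implementation; in this log the probing errors simply stop once the real driver has been installed.

```go
// Minimal illustrative FlexVolume driver stub: prints the JSON status
// object kubelet expects from the "init" call. Not the Calico uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```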
Nov 24 00:18:41.896539 containerd[1704]: time="2025-11-24T00:18:41.896472984Z" level=info msg="StartContainer for \"5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb\" returns successfully" Nov 24 00:18:41.902131 systemd[1]: cri-containerd-5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb.scope: Deactivated successfully. Nov 24 00:18:41.905420 containerd[1704]: time="2025-11-24T00:18:41.905369935Z" level=info msg="received container exit event container_id:\"5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb\" id:\"5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb\" pid:3834 exited_at:{seconds:1763943521 nanos:905022919}" Nov 24 00:18:41.921977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5124486f214db9fab27f39e353af4aa4dc64b648e204ad6ab2c3f9eb31f7aafb-rootfs.mount: Deactivated successfully. Nov 24 00:18:42.411134 kubelet[3165]: E1124 00:18:42.411020 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:42.494974 kubelet[3165]: I1124 00:18:42.494947 3165 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:18:44.411964 kubelet[3165]: E1124 00:18:44.410881 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:44.500471 containerd[1704]: time="2025-11-24T00:18:44.500231204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:18:46.412932 kubelet[3165]: E1124 00:18:46.411110 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:48.166920 containerd[1704]: time="2025-11-24T00:18:48.166866242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:48.169824 containerd[1704]: time="2025-11-24T00:18:48.169802407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:18:48.172952 containerd[1704]: time="2025-11-24T00:18:48.172929916Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:48.177905 containerd[1704]: time="2025-11-24T00:18:48.177875986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:48.178942 containerd[1704]: time="2025-11-24T00:18:48.178629029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.678356651s" Nov 24 00:18:48.178942 containerd[1704]: time="2025-11-24T00:18:48.178656045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:18:48.181346 containerd[1704]: time="2025-11-24T00:18:48.181280532Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:18:48.205856 containerd[1704]: time="2025-11-24T00:18:48.204305051Z" level=info msg="Container c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:48.222761 containerd[1704]: time="2025-11-24T00:18:48.222733912Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4\"" Nov 24 00:18:48.223179 containerd[1704]: time="2025-11-24T00:18:48.223131721Z" level=info msg="StartContainer for \"c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4\"" Nov 24 00:18:48.224609 containerd[1704]: time="2025-11-24T00:18:48.224577707Z" level=info msg="connecting to shim c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4" address="unix:///run/containerd/s/0165d5902903db9b4f8030b6d5331272f2797acaabc9b3bca82a162113b3bc1b" protocol=ttrpc version=3 Nov 24 00:18:48.247330 systemd[1]: Started cri-containerd-c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4.scope - libcontainer container c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4. Nov 24 00:18:48.308256 containerd[1704]: time="2025-11-24T00:18:48.308227513Z" level=info msg="StartContainer for \"c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4\" returns successfully" Nov 24 00:18:48.411257 kubelet[3165]: E1124 00:18:48.411156 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:49.830254 containerd[1704]: time="2025-11-24T00:18:49.830151109Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:18:49.832285 systemd[1]: cri-containerd-c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4.scope: Deactivated successfully. Nov 24 00:18:49.832542 systemd[1]: cri-containerd-c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4.scope: Consumed 421ms CPU time, 193.2M memory peak, 171.3M written to disk. 
Nov 24 00:18:49.833982 containerd[1704]: time="2025-11-24T00:18:49.833957123Z" level=info msg="received container exit event container_id:\"c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4\" id:\"c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4\" pid:3894 exited_at:{seconds:1763943529 nanos:833091236}" Nov 24 00:18:49.855142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11bba2a47b76c6134bbfc890b9f245181b44e7ea7f7493e99ddecf33332b7f4-rootfs.mount: Deactivated successfully. Nov 24 00:18:49.924025 kubelet[3165]: I1124 00:18:49.923998 3165 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:18:49.967296 systemd[1]: Created slice kubepods-burstable-pod364daebb_f821_452e_8e18_337f9a9c926f.slice - libcontainer container kubepods-burstable-pod364daebb_f821_452e_8e18_337f9a9c926f.slice. Nov 24 00:18:49.979945 kubelet[3165]: W1124 00:18:49.979908 3165 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4459.2.1-a-8bf8e53aa8" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459.2.1-a-8bf8e53aa8' and this object Nov 24 00:18:49.980068 kubelet[3165]: E1124 00:18:49.979953 3165 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4459.2.1-a-8bf8e53aa8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459.2.1-a-8bf8e53aa8' and this object" logger="UnhandledError" Nov 24 00:18:49.989473 kubelet[3165]: W1124 00:18:49.989449 3165 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4459.2.1-a-8bf8e53aa8" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.1-a-8bf8e53aa8' and this object Nov 24 00:18:49.989635 kubelet[3165]: E1124 00:18:49.989611 3165 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4459.2.1-a-8bf8e53aa8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.1-a-8bf8e53aa8' and this object" logger="UnhandledError" Nov 24 00:18:49.989980 systemd[1]: Created slice kubepods-besteffort-pod59c9a609_1992_4463_b755_389571dcaa93.slice - libcontainer container kubepods-besteffort-pod59c9a609_1992_4463_b755_389571dcaa93.slice. Nov 24 00:18:49.999111 systemd[1]: Created slice kubepods-burstable-podceabcd19_99bf_4acc_aafa_9d2516e3bf94.slice - libcontainer container kubepods-burstable-podceabcd19_99bf_4acc_aafa_9d2516e3bf94.slice. Nov 24 00:18:50.008130 systemd[1]: Created slice kubepods-besteffort-podad4c919c_797e_428c_84e6_68836a861659.slice - libcontainer container kubepods-besteffort-podad4c919c_797e_428c_84e6_68836a861659.slice. Nov 24 00:18:50.012675 systemd[1]: Created slice kubepods-besteffort-podd05efb04_97c4_4681_b343_8c87d932c961.slice - libcontainer container kubepods-besteffort-podd05efb04_97c4_4681_b343_8c87d932c961.slice. 
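Annotation: the "Created slice kubepods-...-pod....slice" lines just above follow a naming pattern that can be read straight out of this log: the pod's QoS class plus its UID, with the UID's dashes turned into underscores. A small sketch of that mapping, derived from the slice names in this log rather than from kubelet source (the function name `podSliceName` is mine):

```go
// Sketch of the systemd slice naming visible in this log:
// kubepods-<qos>-pod<uid>.slice, with '-' in the UID replaced by '_'.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod364daebb_f821_452e_8e18_337f9a9c926f.slice" above.
	fmt.Println(podSliceName("burstable", "364daebb-f821-452e-8e18-337f9a9c926f"))
	// Matches "kubepods-besteffort-pod59c9a609_1992_4463_b755_389571dcaa93.slice" above.
	fmt.Println(podSliceName("besteffort", "59c9a609-1992-4463-b755-389571dcaa93"))
}
```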
Nov 24 00:18:50.017084 kubelet[3165]: I1124 00:18:50.016796 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d3a110a-eb9c-4905-82ad-09bfe36d2064-tigera-ca-bundle\") pod \"calico-kube-controllers-7b58cdd7d9-2thpc\" (UID: \"8d3a110a-eb9c-4905-82ad-09bfe36d2064\") " pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" Nov 24 00:18:50.018188 kubelet[3165]: I1124 00:18:50.018144 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz62l\" (UniqueName: \"kubernetes.io/projected/ad4c919c-797e-428c-84e6-68836a861659-kube-api-access-tz62l\") pod \"whisker-66cd456b8f-jdwfj\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " pod="calico-system/whisker-66cd456b8f-jdwfj" Nov 24 00:18:50.018263 kubelet[3165]: I1124 00:18:50.018199 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceabcd19-99bf-4acc-aafa-9d2516e3bf94-config-volume\") pod \"coredns-668d6bf9bc-bbdwm\" (UID: \"ceabcd19-99bf-4acc-aafa-9d2516e3bf94\") " pod="kube-system/coredns-668d6bf9bc-bbdwm" Nov 24 00:18:50.018263 kubelet[3165]: I1124 00:18:50.018229 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggdj\" (UniqueName: \"kubernetes.io/projected/ceabcd19-99bf-4acc-aafa-9d2516e3bf94-kube-api-access-4ggdj\") pod \"coredns-668d6bf9bc-bbdwm\" (UID: \"ceabcd19-99bf-4acc-aafa-9d2516e3bf94\") " pod="kube-system/coredns-668d6bf9bc-bbdwm" Nov 24 00:18:50.018263 kubelet[3165]: I1124 00:18:50.018247 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kh2z\" (UniqueName: \"kubernetes.io/projected/59c9a609-1992-4463-b755-389571dcaa93-kube-api-access-9kh2z\") pod \"calico-apiserver-d88c99f6b-g5jrw\" (UID: \"59c9a609-1992-4463-b755-389571dcaa93\") " pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" Nov 24 00:18:50.018347 kubelet[3165]: I1124 00:18:50.018266 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d4961d-4fb6-4f95-8d11-3d57944631db-config\") pod \"goldmane-666569f655-wzcms\" (UID: \"45d4961d-4fb6-4f95-8d11-3d57944631db\") " pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:50.018347 kubelet[3165]: I1124 00:18:50.018285 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle\") pod \"whisker-66cd456b8f-jdwfj\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " pod="calico-system/whisker-66cd456b8f-jdwfj" Nov 24 00:18:50.018347 kubelet[3165]: I1124 00:18:50.018305 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d05efb04-97c4-4681-b343-8c87d932c961-calico-apiserver-certs\") pod \"calico-apiserver-d88c99f6b-q6djh\" (UID: \"d05efb04-97c4-4681-b343-8c87d932c961\") " pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" Nov 24 00:18:50.018347 kubelet[3165]: I1124 00:18:50.018324 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/45d4961d-4fb6-4f95-8d11-3d57944631db-goldmane-key-pair\") pod \"goldmane-666569f655-wzcms\" (UID: \"45d4961d-4fb6-4f95-8d11-3d57944631db\") " pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:50.018443 kubelet[3165]: I1124 00:18:50.018345 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxvn\" (UniqueName: \"kubernetes.io/projected/45d4961d-4fb6-4f95-8d11-3d57944631db-kube-api-access-cmxvn\") pod \"goldmane-666569f655-wzcms\" (UID: \"45d4961d-4fb6-4f95-8d11-3d57944631db\") " pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:50.018443 kubelet[3165]: I1124 00:18:50.018364 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/59c9a609-1992-4463-b755-389571dcaa93-calico-apiserver-certs\") pod \"calico-apiserver-d88c99f6b-g5jrw\" (UID: \"59c9a609-1992-4463-b755-389571dcaa93\") " pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" Nov 24 00:18:50.018443 kubelet[3165]: I1124 00:18:50.018385 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgwh\" (UniqueName: \"kubernetes.io/projected/364daebb-f821-452e-8e18-337f9a9c926f-kube-api-access-bbgwh\") pod \"coredns-668d6bf9bc-d8vnm\" (UID: \"364daebb-f821-452e-8e18-337f9a9c926f\") " pod="kube-system/coredns-668d6bf9bc-d8vnm" Nov 24 00:18:50.018443 kubelet[3165]: I1124 00:18:50.018406 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdxwf\" (UniqueName: \"kubernetes.io/projected/8d3a110a-eb9c-4905-82ad-09bfe36d2064-kube-api-access-rdxwf\") pod \"calico-kube-controllers-7b58cdd7d9-2thpc\" (UID: \"8d3a110a-eb9c-4905-82ad-09bfe36d2064\") " pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" Nov 24 00:18:50.018443 kubelet[3165]: I1124 00:18:50.018427 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/364daebb-f821-452e-8e18-337f9a9c926f-config-volume\") pod \"coredns-668d6bf9bc-d8vnm\" (UID: \"364daebb-f821-452e-8e18-337f9a9c926f\") " pod="kube-system/coredns-668d6bf9bc-d8vnm" Nov 24 00:18:50.018562 kubelet[3165]: I1124 00:18:50.018446 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad4c919c-797e-428c-84e6-68836a861659-whisker-backend-key-pair\") pod \"whisker-66cd456b8f-jdwfj\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " pod="calico-system/whisker-66cd456b8f-jdwfj" Nov 24 00:18:50.018562 kubelet[3165]: I1124 00:18:50.018467 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d4961d-4fb6-4f95-8d11-3d57944631db-goldmane-ca-bundle\") pod \"goldmane-666569f655-wzcms\" (UID: \"45d4961d-4fb6-4f95-8d11-3d57944631db\") " pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:50.018562 kubelet[3165]: I1124 00:18:50.018492 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7k9m\" (UniqueName: \"kubernetes.io/projected/d05efb04-97c4-4681-b343-8c87d932c961-kube-api-access-l7k9m\") pod \"calico-apiserver-d88c99f6b-q6djh\" (UID: \"d05efb04-97c4-4681-b343-8c87d932c961\") " 
pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" Nov 24 00:18:50.018677 systemd[1]: Created slice kubepods-besteffort-pod45d4961d_4fb6_4f95_8d11_3d57944631db.slice - libcontainer container kubepods-besteffort-pod45d4961d_4fb6_4f95_8d11_3d57944631db.slice. Nov 24 00:18:50.026010 systemd[1]: Created slice kubepods-besteffort-pod8d3a110a_eb9c_4905_82ad_09bfe36d2064.slice - libcontainer container kubepods-besteffort-pod8d3a110a_eb9c_4905_82ad_09bfe36d2064.slice. Nov 24 00:18:50.271543 containerd[1704]: time="2025-11-24T00:18:50.271504986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d8vnm,Uid:364daebb-f821-452e-8e18-337f9a9c926f,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:50.303980 containerd[1704]: time="2025-11-24T00:18:50.303944605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbdwm,Uid:ceabcd19-99bf-4acc-aafa-9d2516e3bf94,Namespace:kube-system,Attempt:0,}" Nov 24 00:18:50.323109 containerd[1704]: time="2025-11-24T00:18:50.323078937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wzcms,Uid:45d4961d-4fb6-4f95-8d11-3d57944631db,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:50.329667 containerd[1704]: time="2025-11-24T00:18:50.329634095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b58cdd7d9-2thpc,Uid:8d3a110a-eb9c-4905-82ad-09bfe36d2064,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:50.415353 systemd[1]: Created slice kubepods-besteffort-pod377ffa75_e56f_4a86_9355_a323312d6a89.slice - libcontainer container kubepods-besteffort-pod377ffa75_e56f_4a86_9355_a323312d6a89.slice. Nov 24 00:18:50.417424 containerd[1704]: time="2025-11-24T00:18:50.417393995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6pwc,Uid:377ffa75-e56f-4a86-9355-a323312d6a89,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:51.121190 kubelet[3165]: E1124 00:18:51.120277 3165 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 24 00:18:51.121190 kubelet[3165]: E1124 00:18:51.120377 3165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d05efb04-97c4-4681-b343-8c87d932c961-calico-apiserver-certs podName:d05efb04-97c4-4681-b343-8c87d932c961 nodeName:}" failed. No retries permitted until 2025-11-24 00:18:51.6203544 +0000 UTC m=+35.300131643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d05efb04-97c4-4681-b343-8c87d932c961-calico-apiserver-certs") pod "calico-apiserver-d88c99f6b-q6djh" (UID: "d05efb04-97c4-4681-b343-8c87d932c961") : failed to sync secret cache: timed out waiting for the condition Nov 24 00:18:51.122922 kubelet[3165]: E1124 00:18:51.121842 3165 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 24 00:18:51.122922 kubelet[3165]: E1124 00:18:51.121910 3165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9a609-1992-4463-b755-389571dcaa93-calico-apiserver-certs podName:59c9a609-1992-4463-b755-389571dcaa93 nodeName:}" failed. No retries permitted until 2025-11-24 00:18:51.621892277 +0000 UTC m=+35.301669507 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/59c9a609-1992-4463-b755-389571dcaa93-calico-apiserver-certs") pod "calico-apiserver-d88c99f6b-g5jrw" (UID: "59c9a609-1992-4463-b755-389571dcaa93") : failed to sync secret cache: timed out waiting for the condition Nov 24 00:18:51.122922 kubelet[3165]: E1124 00:18:51.121940 3165 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 24 00:18:51.122922 kubelet[3165]: E1124 00:18:51.121971 3165 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle podName:ad4c919c-797e-428c-84e6-68836a861659 nodeName:}" failed. No retries permitted until 2025-11-24 00:18:51.621963513 +0000 UTC m=+35.301740752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle") pod "whisker-66cd456b8f-jdwfj" (UID: "ad4c919c-797e-428c-84e6-68836a861659") : failed to sync configmap cache: timed out waiting for the condition Nov 24 00:18:51.150475 containerd[1704]: time="2025-11-24T00:18:51.150400721Z" level=error msg="Failed to destroy network for sandbox \"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.150988 containerd[1704]: time="2025-11-24T00:18:51.150925089Z" level=error msg="Failed to destroy network for sandbox \"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.151100 containerd[1704]: time="2025-11-24T00:18:51.151084291Z" level=error msg="Failed to destroy network for sandbox \"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.155089 containerd[1704]: time="2025-11-24T00:18:51.155033130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbdwm,Uid:ceabcd19-99bf-4acc-aafa-9d2516e3bf94,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.156182 kubelet[3165]: E1124 00:18:51.156037 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.156182 kubelet[3165]: E1124 00:18:51.156111 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bbdwm" Nov 24 00:18:51.156182 kubelet[3165]: E1124 00:18:51.156136 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bbdwm" Nov 24 00:18:51.156311 kubelet[3165]: E1124 00:18:51.156204 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bbdwm_kube-system(ceabcd19-99bf-4acc-aafa-9d2516e3bf94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bbdwm_kube-system(ceabcd19-99bf-4acc-aafa-9d2516e3bf94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f905e453000f41bcc87803a145af6bb96dc8ca8e56007c33ab4fc209db2cc61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bbdwm" podUID="ceabcd19-99bf-4acc-aafa-9d2516e3bf94" Nov 24 00:18:51.157987 containerd[1704]: time="2025-11-24T00:18:51.157934616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d8vnm,Uid:364daebb-f821-452e-8e18-337f9a9c926f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.158418 kubelet[3165]: E1124 00:18:51.158387 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.158491 kubelet[3165]: E1124 00:18:51.158437 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d8vnm" Nov 24 00:18:51.158491 kubelet[3165]: E1124 00:18:51.158461 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d8vnm" Nov 24 00:18:51.158697 kubelet[3165]: E1124 00:18:51.158497 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d8vnm_kube-system(364daebb-f821-452e-8e18-337f9a9c926f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d8vnm_kube-system(364daebb-f821-452e-8e18-337f9a9c926f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ac32954b74ac467daff810b0da4ea8bbd6477b1810e26d0084a8631ea25a088\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d8vnm" podUID="364daebb-f821-452e-8e18-337f9a9c926f" Nov 24 00:18:51.161356 containerd[1704]: time="2025-11-24T00:18:51.161221947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wzcms,Uid:45d4961d-4fb6-4f95-8d11-3d57944631db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.161699 kubelet[3165]: E1124 00:18:51.161673 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.161876 kubelet[3165]: E1124 00:18:51.161715 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:51.161876 kubelet[3165]: E1124 00:18:51.161741 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wzcms" Nov 24 00:18:51.161876 kubelet[3165]: E1124 00:18:51.161775 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"228a6a65978cb1aefd39370c4c8c8cb1b8d3fc80c40bf8de52a07f05f9913670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:18:51.163671 containerd[1704]: time="2025-11-24T00:18:51.163445852Z" level=error msg="Failed to destroy network for sandbox \"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.164526 containerd[1704]: time="2025-11-24T00:18:51.164499371Z" level=error msg="Failed to destroy network for sandbox \"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.167705 containerd[1704]: time="2025-11-24T00:18:51.167270121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b58cdd7d9-2thpc,Uid:8d3a110a-eb9c-4905-82ad-09bfe36d2064,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.168025 kubelet[3165]: E1124 00:18:51.168002 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.168091 kubelet[3165]: E1124 00:18:51.168035 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" Nov 24 00:18:51.168091 kubelet[3165]: E1124 00:18:51.168054 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" Nov 24 00:18:51.168151 kubelet[3165]: E1124 00:18:51.168085 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bd85ca7c85dc1e5ac8237cec081342edfeeaebbba349c1d84cd1593b0851862\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:18:51.170811 containerd[1704]: time="2025-11-24T00:18:51.170734922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6pwc,Uid:377ffa75-e56f-4a86-9355-a323312d6a89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.170921 kubelet[3165]: E1124 00:18:51.170896 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.170961 kubelet[3165]: E1124 00:18:51.170947 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:51.171001 kubelet[3165]: E1124 00:18:51.170969 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6pwc" Nov 24 00:18:51.171036 kubelet[3165]: E1124 00:18:51.171002 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"099b981dfc6a678f3befc7e8e4c5be349ab90f37f7e31d772b7e1764eb7150bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:18:51.514124 containerd[1704]: time="2025-11-24T00:18:51.513773464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:18:51.796718 containerd[1704]: time="2025-11-24T00:18:51.796603505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-g5jrw,Uid:59c9a609-1992-4463-b755-389571dcaa93,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:18:51.811692 containerd[1704]: time="2025-11-24T00:18:51.811660702Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cd456b8f-jdwfj,Uid:ad4c919c-797e-428c-84e6-68836a861659,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:51.815734 containerd[1704]: time="2025-11-24T00:18:51.815672689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-q6djh,Uid:d05efb04-97c4-4681-b343-8c87d932c961,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:18:51.850994 containerd[1704]: time="2025-11-24T00:18:51.850913307Z" level=error msg="Failed to destroy network for sandbox \"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.856267 containerd[1704]: time="2025-11-24T00:18:51.856204083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-g5jrw,Uid:59c9a609-1992-4463-b755-389571dcaa93,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.857327 kubelet[3165]: E1124 00:18:51.856588 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.857327 kubelet[3165]: E1124 00:18:51.857268 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" Nov 24 00:18:51.857327 kubelet[3165]: E1124 00:18:51.857292 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" Nov 24 00:18:51.857470 kubelet[3165]: E1124 00:18:51.857438 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea1470c4fa5db48c9633112a524eb86074266991a192e18dd63cad4952529fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:18:51.861201 systemd[1]: run-netns-cni\x2d75ed79cd\x2d02af\x2de30a\x2d7de7\x2d03624196d1a8.mount: Deactivated successfully. Nov 24 00:18:51.861309 systemd[1]: run-netns-cni\x2d55ab7e89\x2de110\x2d7ebf\x2db3bb\x2d52f6062af937.mount: Deactivated successfully. Nov 24 00:18:51.861365 systemd[1]: run-netns-cni\x2ded746089\x2df7f3\x2d2159\x2df353\x2d12827537671a.mount: Deactivated successfully. Nov 24 00:18:51.861423 systemd[1]: run-netns-cni\x2d48c467bb\x2df340\x2d97c7\x2d1fbf\x2d4713e503f7b2.mount: Deactivated successfully. Nov 24 00:18:51.861472 systemd[1]: run-netns-cni\x2d0a853540\x2d1eba\x2d3550\x2d239e\x2dad20b9d07a46.mount: Deactivated successfully. Nov 24 00:18:51.911474 containerd[1704]: time="2025-11-24T00:18:51.910720529Z" level=error msg="Failed to destroy network for sandbox \"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.913952 systemd[1]: run-netns-cni\x2d2455b515\x2df8a7\x2dba5c\x2d79dc\x2db0f4bf657806.mount: Deactivated successfully. Nov 24 00:18:51.916471 containerd[1704]: time="2025-11-24T00:18:51.916391084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-q6djh,Uid:d05efb04-97c4-4681-b343-8c87d932c961,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.917064 kubelet[3165]: E1124 00:18:51.916728 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.917064 kubelet[3165]: E1124 00:18:51.916780 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" Nov 24 00:18:51.917064 kubelet[3165]: E1124 00:18:51.916802 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" Nov 24 00:18:51.917239 kubelet[3165]: E1124 00:18:51.916847 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ec52de5efbf20375559295a19bce16d657c08f51b94fbf01b3d2b3e402fb8df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:18:51.919100 containerd[1704]: time="2025-11-24T00:18:51.919063052Z" level=error msg="Failed to destroy network for sandbox \"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.921301 systemd[1]: run-netns-cni\x2d5a6c7121\x2dba5d\x2d46b2\x2dbe5f\x2d22a3c4aa8d8e.mount: Deactivated successfully. Nov 24 00:18:51.924531 containerd[1704]: time="2025-11-24T00:18:51.924503467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cd456b8f-jdwfj,Uid:ad4c919c-797e-428c-84e6-68836a861659,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.924716 kubelet[3165]: E1124 00:18:51.924693 3165 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:18:51.924793 kubelet[3165]: E1124 00:18:51.924734 3165 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cd456b8f-jdwfj" Nov 24 00:18:51.924793 kubelet[3165]: E1124 00:18:51.924754 3165 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cd456b8f-jdwfj" Nov 24 00:18:51.924867 kubelet[3165]: E1124 00:18:51.924797 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66cd456b8f-jdwfj_calico-system(ad4c919c-797e-428c-84e6-68836a861659)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-66cd456b8f-jdwfj_calico-system(ad4c919c-797e-428c-84e6-68836a861659)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b08af29279589f3c1c90de4ddb7df726dc102e0bb7396393fe3219df5a43f911\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66cd456b8f-jdwfj" podUID="ad4c919c-797e-428c-84e6-68836a861659" Nov 24 00:18:54.185735 kubelet[3165]: I1124 00:18:54.185699 3165 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:18:55.777127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581631737.mount: Deactivated successfully. Nov 24 00:18:55.807622 containerd[1704]: time="2025-11-24T00:18:55.807583325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:55.810541 containerd[1704]: time="2025-11-24T00:18:55.810448906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:18:55.813617 containerd[1704]: time="2025-11-24T00:18:55.813590030Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:55.817410 containerd[1704]: time="2025-11-24T00:18:55.817357963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:18:55.817757 containerd[1704]: time="2025-11-24T00:18:55.817735786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.303920577s" Nov 24 00:18:55.817836 containerd[1704]: time="2025-11-24T00:18:55.817824366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:18:55.830517 containerd[1704]: time="2025-11-24T00:18:55.830494438Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:18:55.853997 containerd[1704]: time="2025-11-24T00:18:55.853969227Z" level=info msg="Container 85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:18:55.871043 containerd[1704]: time="2025-11-24T00:18:55.871013529Z" level=info msg="CreateContainer within sandbox \"9c2ae7360c6bfc605f88d1396c651459db9994d0a362a59a17564d0d82923abc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e\"" Nov 24 00:18:55.871609 containerd[1704]: time="2025-11-24T00:18:55.871586913Z" level=info msg="StartContainer for \"85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e\"" Nov 24 00:18:55.873003 containerd[1704]: time="2025-11-24T00:18:55.872926708Z" level=info msg="connecting to shim 
85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e" address="unix:///run/containerd/s/0165d5902903db9b4f8030b6d5331272f2797acaabc9b3bca82a162113b3bc1b" protocol=ttrpc version=3 Nov 24 00:18:55.894301 systemd[1]: Started cri-containerd-85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e.scope - libcontainer container 85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e. Nov 24 00:18:55.965004 containerd[1704]: time="2025-11-24T00:18:55.964958962Z" level=info msg="StartContainer for \"85e25315f6ec6e2ed431b1b82b0da06395b1fa340678347d85aee18d2b98d45e\" returns successfully" Nov 24 00:18:56.233621 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:18:56.233738 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 00:18:56.364195 kubelet[3165]: I1124 00:18:56.362720 3165 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz62l\" (UniqueName: \"kubernetes.io/projected/ad4c919c-797e-428c-84e6-68836a861659-kube-api-access-tz62l\") pod \"ad4c919c-797e-428c-84e6-68836a861659\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " Nov 24 00:18:56.364619 kubelet[3165]: I1124 00:18:56.364600 3165 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad4c919c-797e-428c-84e6-68836a861659-whisker-backend-key-pair\") pod \"ad4c919c-797e-428c-84e6-68836a861659\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " Nov 24 00:18:56.364692 kubelet[3165]: I1124 00:18:56.364682 3165 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle\") pod \"ad4c919c-797e-428c-84e6-68836a861659\" (UID: \"ad4c919c-797e-428c-84e6-68836a861659\") " Nov 24 00:18:56.365067 kubelet[3165]: I1124 00:18:56.365049 3165 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ad4c919c-797e-428c-84e6-68836a861659" (UID: "ad4c919c-797e-428c-84e6-68836a861659"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:18:56.367292 kubelet[3165]: I1124 00:18:56.367262 3165 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4c919c-797e-428c-84e6-68836a861659-kube-api-access-tz62l" (OuterVolumeSpecName: "kube-api-access-tz62l") pod "ad4c919c-797e-428c-84e6-68836a861659" (UID: "ad4c919c-797e-428c-84e6-68836a861659"). InnerVolumeSpecName "kube-api-access-tz62l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:18:56.370326 kubelet[3165]: I1124 00:18:56.370300 3165 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4c919c-797e-428c-84e6-68836a861659-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ad4c919c-797e-428c-84e6-68836a861659" (UID: "ad4c919c-797e-428c-84e6-68836a861659"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:18:56.419714 systemd[1]: Removed slice kubepods-besteffort-podad4c919c_797e_428c_84e6_68836a861659.slice - libcontainer container kubepods-besteffort-podad4c919c_797e_428c_84e6_68836a861659.slice. 
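Annotation: every "Failed to create sandbox" error earlier in this log reduces to one missing file: the Calico CNI plugin needs /var/lib/calico/nodename, which is written by the calico-node container that has only just started successfully above. The check below reproduces the failure in the spirit of the error message (inferred from the message text; `calicoNodeName` is my own illustrative function, not taken from the Calico plugin source).

```go
// Illustrative check mirroring the error string seen in the sandbox
// failures: without /var/lib/calico/nodename, CNI add/delete cannot work.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeName() (string, error) {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Println("sandbox setup would fail here:", err)
		return
	}
	fmt.Println("node name for Calico workload endpoints:", name)
}
```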
Nov 24 00:18:56.465358 kubelet[3165]: I1124 00:18:56.465330 3165 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tz62l\" (UniqueName: \"kubernetes.io/projected/ad4c919c-797e-428c-84e6-68836a861659-kube-api-access-tz62l\") on node \"ci-4459.2.1-a-8bf8e53aa8\" DevicePath \"\"" Nov 24 00:18:56.465358 kubelet[3165]: I1124 00:18:56.465361 3165 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad4c919c-797e-428c-84e6-68836a861659-whisker-backend-key-pair\") on node \"ci-4459.2.1-a-8bf8e53aa8\" DevicePath \"\"" Nov 24 00:18:56.465589 kubelet[3165]: I1124 00:18:56.465380 3165 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4c919c-797e-428c-84e6-68836a861659-whisker-ca-bundle\") on node \"ci-4459.2.1-a-8bf8e53aa8\" DevicePath \"\"" Nov 24 00:18:56.589761 kubelet[3165]: I1124 00:18:56.589355 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xltbc" podStartSLOduration=1.567839898 podStartE2EDuration="18.589237799s" podCreationTimestamp="2025-11-24 00:18:38 +0000 UTC" firstStartedPulling="2025-11-24 00:18:38.797114741 +0000 UTC m=+22.476891976" lastFinishedPulling="2025-11-24 00:18:55.818512642 +0000 UTC m=+39.498289877" observedRunningTime="2025-11-24 00:18:56.588361569 +0000 UTC m=+40.268138812" watchObservedRunningTime="2025-11-24 00:18:56.589237799 +0000 UTC m=+40.269015040" Nov 24 00:18:56.613840 systemd[1]: Created slice kubepods-besteffort-pod55ebf745_9192_40af_99eb_e78240db2491.slice - libcontainer container kubepods-besteffort-pod55ebf745_9192_40af_99eb_e78240db2491.slice. Nov 24 00:18:56.667447 kubelet[3165]: I1124 00:18:56.667417 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5pcz\" (UniqueName: \"kubernetes.io/projected/55ebf745-9192-40af-99eb-e78240db2491-kube-api-access-g5pcz\") pod \"whisker-5c79b749df-5s9h6\" (UID: \"55ebf745-9192-40af-99eb-e78240db2491\") " pod="calico-system/whisker-5c79b749df-5s9h6" Nov 24 00:18:56.667569 kubelet[3165]: I1124 00:18:56.667453 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ebf745-9192-40af-99eb-e78240db2491-whisker-ca-bundle\") pod \"whisker-5c79b749df-5s9h6\" (UID: \"55ebf745-9192-40af-99eb-e78240db2491\") " pod="calico-system/whisker-5c79b749df-5s9h6" Nov 24 00:18:56.667569 kubelet[3165]: I1124 00:18:56.667496 3165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55ebf745-9192-40af-99eb-e78240db2491-whisker-backend-key-pair\") pod \"whisker-5c79b749df-5s9h6\" (UID: \"55ebf745-9192-40af-99eb-e78240db2491\") " pod="calico-system/whisker-5c79b749df-5s9h6" Nov 24 00:18:56.779749 systemd[1]: var-lib-kubelet-pods-ad4c919c\x2d797e\x2d428c\x2d84e6\x2d68836a861659-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtz62l.mount: Deactivated successfully. Nov 24 00:18:56.780106 systemd[1]: var-lib-kubelet-pods-ad4c919c\x2d797e\x2d428c\x2d84e6\x2d68836a861659-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
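In the pod_startup_latency_tracker entry above, podStartSLOduration is the end-to-end startup time minus the image-pull window, and the logged timestamps bear that out. A small Go sketch of the arithmetic, using the timestamps exactly as logged (monotonic "m=+…" suffixes dropped); the layout string is an assumption matching Go's default time formatting:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-24 00:18:38 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-11-24 00:18:38.797114741 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-11-24 00:18:55.818512642 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2025-11-24 00:18:56.589237799 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)    // 18.589237799s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 17.021397901s spent pulling calico/node
	fmt.Println(e2e, e2e-pull)      // prints 18.589237799s 1.567839898s == podStartSLOduration
}
```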
Nov 24 00:18:56.923032 containerd[1704]: time="2025-11-24T00:18:56.922992463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c79b749df-5s9h6,Uid:55ebf745-9192-40af-99eb-e78240db2491,Namespace:calico-system,Attempt:0,}" Nov 24 00:18:57.043602 systemd-networkd[1340]: calidb017fe6fa9: Link UP Nov 24 00:18:57.045301 systemd-networkd[1340]: calidb017fe6fa9: Gained carrier Nov 24 00:18:57.065456 containerd[1704]: 2025-11-24 00:18:56.949 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:18:57.065456 containerd[1704]: 2025-11-24 00:18:56.958 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0 whisker-5c79b749df- calico-system 55ebf745-9192-40af-99eb-e78240db2491 911 0 2025-11-24 00:18:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c79b749df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 whisker-5c79b749df-5s9h6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb017fe6fa9 [] [] }} ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-" Nov 24 00:18:57.065456 containerd[1704]: 2025-11-24 00:18:56.959 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.065456 containerd[1704]: 2025-11-24 00:18:56.979 [INFO][4258] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" HandleID="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.979 [INFO][4258] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" HandleID="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"whisker-5c79b749df-5s9h6", "timestamp":"2025-11-24 00:18:56.979687067 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.979 [INFO][4258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.979 [INFO][4258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.979 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.984 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.987 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.991 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.992 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065673 containerd[1704]: 2025-11-24 00:18:56.994 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:56.994 [INFO][4258] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:56.995 [INFO][4258] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:56.999 [INFO][4258] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:57.006 [INFO][4258] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 handle="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:57.007 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:57.007 [INFO][4258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:18:57.065895 containerd[1704]: 2025-11-24 00:18:57.007 [INFO][4258] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" HandleID="k8s-pod-network.8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.066047 containerd[1704]: 2025-11-24 00:18:57.009 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0", GenerateName:"whisker-5c79b749df-", Namespace:"calico-system", SelfLink:"", UID:"55ebf745-9192-40af-99eb-e78240db2491", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c79b749df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"whisker-5c79b749df-5s9h6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb017fe6fa9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:18:57.066047 containerd[1704]: 2025-11-24 00:18:57.009 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.129/32] ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.066138 containerd[1704]: 2025-11-24 00:18:57.009 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb017fe6fa9 ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.066138 containerd[1704]: 2025-11-24 00:18:57.044 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.066384 containerd[1704]: 2025-11-24 00:18:57.045 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" 
Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0", GenerateName:"whisker-5c79b749df-", Namespace:"calico-system", SelfLink:"", UID:"55ebf745-9192-40af-99eb-e78240db2491", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c79b749df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c", Pod:"whisker-5c79b749df-5s9h6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb017fe6fa9", MAC:"a2:b3:54:a8:85:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:18:57.066472 containerd[1704]: 2025-11-24 00:18:57.063 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" Namespace="calico-system" Pod="whisker-5c79b749df-5s9h6" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-whisker--5c79b749df--5s9h6-eth0" Nov 24 00:18:57.107859 containerd[1704]: time="2025-11-24T00:18:57.107785487Z" level=info msg="connecting to shim 8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c" address="unix:///run/containerd/s/16e8e1d3808e48d3718c7efe6877013c5cb1073d496fddb27e43a1b251d1be8d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:18:57.132325 systemd[1]: Started cri-containerd-8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c.scope - libcontainer container 8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c. 
Nov 24 00:18:57.177198 containerd[1704]: time="2025-11-24T00:18:57.176610388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c79b749df-5s9h6,Uid:55ebf745-9192-40af-99eb-e78240db2491,Namespace:calico-system,Attempt:0,} returns sandbox id \"8adebd67a7d60c0b1ebb3d74ac3700486af88e833300ce7986c56c70e59f655c\"" Nov 24 00:18:57.178742 containerd[1704]: time="2025-11-24T00:18:57.178690724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:18:57.446273 containerd[1704]: time="2025-11-24T00:18:57.446049755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:57.449396 containerd[1704]: time="2025-11-24T00:18:57.449354110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:18:57.449517 containerd[1704]: time="2025-11-24T00:18:57.449363557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:18:57.449620 kubelet[3165]: E1124 00:18:57.449582 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:57.449892 kubelet[3165]: E1124 00:18:57.449634 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:57.449922 kubelet[3165]: E1124 00:18:57.449888 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6fa07e6ad45646be8de5ce808d3bf5bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:57.452199 containerd[1704]: time="2025-11-24T00:18:57.452036387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:18:57.724069 containerd[1704]: time="2025-11-24T00:18:57.723942553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:57.727321 containerd[1704]: time="2025-11-24T00:18:57.727274094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:18:57.728004 containerd[1704]: time="2025-11-24T00:18:57.727377317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:18:57.728059 kubelet[3165]: E1124 00:18:57.727521 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:57.728059 kubelet[3165]: E1124 00:18:57.727572 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:57.728144 kubelet[3165]: E1124 00:18:57.727706 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:57.729335 kubelet[3165]: E1124 00:18:57.729272 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:18:58.198008 systemd-networkd[1340]: vxlan.calico: Link UP Nov 24 00:18:58.198017 systemd-networkd[1340]: vxlan.calico: Gained carrier Nov 24 00:18:58.272263 systemd-networkd[1340]: calidb017fe6fa9: Gained IPv6LL Nov 24 00:18:58.413341 kubelet[3165]: I1124 00:18:58.413302 3165 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4c919c-797e-428c-84e6-68836a861659" path="/var/lib/kubelet/pods/ad4c919c-797e-428c-84e6-68836a861659/volumes" Nov 24 00:18:58.536725 kubelet[3165]: E1124 00:18:58.536438 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:19:00.128308 systemd-networkd[1340]: vxlan.calico: Gained IPv6LL Nov 24 00:19:02.411961 containerd[1704]: time="2025-11-24T00:19:02.411570879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-g5jrw,Uid:59c9a609-1992-4463-b755-389571dcaa93,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:19:02.512040 systemd-networkd[1340]: calib834f658705: Link UP Nov 24 00:19:02.513531 systemd-networkd[1340]: calib834f658705: Gained carrier Nov 24 00:19:02.527669 containerd[1704]: 2025-11-24 00:19:02.452 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0 calico-apiserver-d88c99f6b- calico-apiserver 59c9a609-1992-4463-b755-389571dcaa93 837 0 2025-11-24 00:18:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d88c99f6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 calico-apiserver-d88c99f6b-g5jrw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib834f658705 [] [] }} ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-" Nov 24 00:19:02.527669 containerd[1704]: 2025-11-24 00:19:02.452 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.527669 containerd[1704]: 2025-11-24 00:19:02.476 [INFO][4551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" HandleID="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.527850 
containerd[1704]: 2025-11-24 00:19:02.476 [INFO][4551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" HandleID="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"calico-apiserver-d88c99f6b-g5jrw", "timestamp":"2025-11-24 00:19:02.476517382 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.476 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.476 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.476 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.482 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.485 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.488 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.490 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.527850 containerd[1704]: 2025-11-24 00:19:02.491 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.491 [INFO][4551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.492 [INFO][4551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6 Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.496 [INFO][4551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.506 [INFO][4551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 handle="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.506 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] 
handle="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.506 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:19:02.528073 containerd[1704]: 2025-11-24 00:19:02.506 [INFO][4551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" HandleID="k8s-pod-network.1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.528286 containerd[1704]: 2025-11-24 00:19:02.509 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0", GenerateName:"calico-apiserver-d88c99f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"59c9a609-1992-4463-b755-389571dcaa93", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d88c99f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"calico-apiserver-d88c99f6b-g5jrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib834f658705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:02.528358 containerd[1704]: 2025-11-24 00:19:02.509 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.130/32] ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.528358 containerd[1704]: 2025-11-24 00:19:02.509 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib834f658705 ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.528358 containerd[1704]: 2025-11-24 00:19:02.514 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.528583 containerd[1704]: 2025-11-24 00:19:02.514 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0", GenerateName:"calico-apiserver-d88c99f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"59c9a609-1992-4463-b755-389571dcaa93", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d88c99f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6", Pod:"calico-apiserver-d88c99f6b-g5jrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib834f658705", MAC:"06:1a:6d:31:aa:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:02.528676 containerd[1704]: 2025-11-24 00:19:02.524 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-g5jrw" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--g5jrw-eth0" Nov 24 00:19:02.575728 containerd[1704]: time="2025-11-24T00:19:02.575562223Z" level=info msg="connecting to shim 1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6" address="unix:///run/containerd/s/5a9ab362bc6a633f8df604c8a6ecf8935523f212a55e644a0ae524e0becc9d8b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:02.601300 systemd[1]: Started cri-containerd-1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6.scope - libcontainer container 1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6. 
Nov 24 00:19:02.643107 containerd[1704]: time="2025-11-24T00:19:02.643080618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-g5jrw,Uid:59c9a609-1992-4463-b755-389571dcaa93,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1dcc95b48d729220e07556fe8f93e398a7b4f3435b2f0af336de8ad5d0fa71b6\"" Nov 24 00:19:02.644433 containerd[1704]: time="2025-11-24T00:19:02.644399597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:02.902347 containerd[1704]: time="2025-11-24T00:19:02.902302324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:02.906199 containerd[1704]: time="2025-11-24T00:19:02.906153720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:02.906275 containerd[1704]: time="2025-11-24T00:19:02.906251383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:02.906444 kubelet[3165]: E1124 00:19:02.906400 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:02.906766 kubelet[3165]: E1124 00:19:02.906449 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:02.906766 kubelet[3165]: E1124 00:19:02.906587 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kh2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:02.908101 kubelet[3165]: E1124 00:19:02.908028 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:03.549581 kubelet[3165]: E1124 00:19:03.549516 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:04.032310 systemd-networkd[1340]: calib834f658705: Gained IPv6LL Nov 24 00:19:04.411981 containerd[1704]: time="2025-11-24T00:19:04.411922673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wzcms,Uid:45d4961d-4fb6-4f95-8d11-3d57944631db,Namespace:calico-system,Attempt:0,}" Nov 24 00:19:04.413975 containerd[1704]: time="2025-11-24T00:19:04.413795582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6pwc,Uid:377ffa75-e56f-4a86-9355-a323312d6a89,Namespace:calico-system,Attempt:0,}" Nov 24 00:19:04.415820 containerd[1704]: time="2025-11-24T00:19:04.415796317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbdwm,Uid:ceabcd19-99bf-4acc-aafa-9d2516e3bf94,Namespace:kube-system,Attempt:0,}" Nov 24 00:19:04.550144 kubelet[3165]: E1124 00:19:04.550106 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:04.573321 systemd-networkd[1340]: cali27db3273828: Link UP Nov 24 00:19:04.574943 systemd-networkd[1340]: cali27db3273828: Gained carrier Nov 24 00:19:04.591225 containerd[1704]: 2025-11-24 00:19:04.485 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0 csi-node-driver- calico-system 377ffa75-e56f-4a86-9355-a323312d6a89 726 0 2025-11-24 00:18:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 csi-node-driver-z6pwc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali27db3273828 [] [] }} ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-" Nov 24 00:19:04.591225 containerd[1704]: 2025-11-24 00:19:04.486 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591225 containerd[1704]: 2025-11-24 00:19:04.524 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" HandleID="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.524 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" HandleID="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"csi-node-driver-z6pwc", "timestamp":"2025-11-24 00:19:04.524694376 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.524 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.525 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.525 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.531 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.535 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.538 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.541 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591415 containerd[1704]: 2025-11-24 00:19:04.543 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.543 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.544 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43 Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.552 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.563 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.563 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.563 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:19:04.591656 containerd[1704]: 2025-11-24 00:19:04.563 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" HandleID="k8s-pod-network.469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591806 containerd[1704]: 2025-11-24 00:19:04.566 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"377ffa75-e56f-4a86-9355-a323312d6a89", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"csi-node-driver-z6pwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27db3273828", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.591868 containerd[1704]: 2025-11-24 00:19:04.566 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.131/32] ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591868 containerd[1704]: 2025-11-24 00:19:04.566 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27db3273828 ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591868 containerd[1704]: 2025-11-24 00:19:04.574 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.591939 containerd[1704]: 2025-11-24 00:19:04.575 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"377ffa75-e56f-4a86-9355-a323312d6a89", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43", Pod:"csi-node-driver-z6pwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27db3273828", MAC:"b2:fa:cb:88:27:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.591996 containerd[1704]: 2025-11-24 00:19:04.589 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" Namespace="calico-system" Pod="csi-node-driver-z6pwc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-csi--node--driver--z6pwc-eth0" Nov 24 00:19:04.643322 containerd[1704]: time="2025-11-24T00:19:04.643277503Z" level=info msg="connecting to shim 469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43" address="unix:///run/containerd/s/d78cb12e673e6bcd741c29dc2845075056959b9a2fdabe62ee82af789352b4ba" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:04.667310 systemd[1]: Started cri-containerd-469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43.scope - libcontainer container 469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43. 
Nov 24 00:19:04.687926 systemd-networkd[1340]: califbc3449359e: Link UP Nov 24 00:19:04.688561 systemd-networkd[1340]: califbc3449359e: Gained carrier Nov 24 00:19:04.718668 containerd[1704]: 2025-11-24 00:19:04.487 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0 goldmane-666569f655- calico-system 45d4961d-4fb6-4f95-8d11-3d57944631db 842 0 2025-11-24 00:18:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 goldmane-666569f655-wzcms eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califbc3449359e [] [] }} ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-" Nov 24 00:19:04.718668 containerd[1704]: 2025-11-24 00:19:04.488 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.718668 containerd[1704]: 2025-11-24 00:19:04.536 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" HandleID="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.538 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" HandleID="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"goldmane-666569f655-wzcms", "timestamp":"2025-11-24 00:19:04.536766017 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.538 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.563 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.564 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.635 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.644 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.648 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.658 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718825 containerd[1704]: 2025-11-24 00:19:04.661 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.661 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.664 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11 Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.672 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
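Aside: the IPAM walk above repeats the same pattern for every pod on this node; the plugin takes the host-wide lock, confirms this node's affinity for the 192.168.52.128/26 block, and claims the next free address from it (here 192.168.52.132 for goldmane-666569f655-wzcms). A stdlib-only Go sketch (illustrative, not Calico's implementation) makes the block arithmetic concrete:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affinity block and the address Calico IPAM claimed from it,
    	// taken from the log entries above.
    	block := netip.MustParsePrefix("192.168.52.128/26")
    	claimed := netip.MustParseAddr("192.168.52.132")

    	// A /26 block covers 2^(32-26) = 64 addresses.
    	size := 1 << (32 - block.Bits())

    	fmt.Printf("%s contains %s: %v (block size: %d addresses)\n",
    		block, claimed, block.Contains(claimed), size)
    }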
Nov 24 00:19:04.718981 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" HandleID="k8s-pod-network.f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.719090 containerd[1704]: 2025-11-24 00:19:04.683 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"45d4961d-4fb6-4f95-8d11-3d57944631db", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"goldmane-666569f655-wzcms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbc3449359e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.719146 containerd[1704]: 2025-11-24 00:19:04.684 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.132/32] ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.719146 containerd[1704]: 2025-11-24 00:19:04.684 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbc3449359e ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.719146 containerd[1704]: 2025-11-24 00:19:04.691 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.719772 containerd[1704]: 2025-11-24 00:19:04.691 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" 
Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"45d4961d-4fb6-4f95-8d11-3d57944631db", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11", Pod:"goldmane-666569f655-wzcms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbc3449359e", MAC:"8a:21:a1:eb:a5:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.720039 containerd[1704]: 2025-11-24 00:19:04.716 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" Namespace="calico-system" Pod="goldmane-666569f655-wzcms" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-goldmane--666569f655--wzcms-eth0" Nov 24 00:19:04.723589 containerd[1704]: time="2025-11-24T00:19:04.722660509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6pwc,Uid:377ffa75-e56f-4a86-9355-a323312d6a89,Namespace:calico-system,Attempt:0,} returns sandbox id \"469da13ed25302e82c12478b2a48a2e9d076795d715a3b5debd719ec765b1f43\"" Nov 24 00:19:04.725105 containerd[1704]: time="2025-11-24T00:19:04.725078206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:19:04.779116 containerd[1704]: time="2025-11-24T00:19:04.778927233Z" level=info msg="connecting to shim f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11" address="unix:///run/containerd/s/f7c0d25c080b0c452f036b0de2cc1a5b2cec9fe717ea52e6efd7d42550af6126" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:04.784598 systemd-networkd[1340]: calif84bfb3312c: Link UP Nov 24 00:19:04.785149 systemd-networkd[1340]: calif84bfb3312c: Gained carrier Nov 24 00:19:04.807392 containerd[1704]: 2025-11-24 00:19:04.504 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0 coredns-668d6bf9bc- kube-system ceabcd19-99bf-4acc-aafa-9d2516e3bf94 838 0 2025-11-24 00:18:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 coredns-668d6bf9bc-bbdwm eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif84bfb3312c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-" Nov 24 00:19:04.807392 containerd[1704]: 2025-11-24 00:19:04.504 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807392 containerd[1704]: 2025-11-24 00:19:04.547 [INFO][4672] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" HandleID="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.547 [INFO][4672] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" HandleID="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"coredns-668d6bf9bc-bbdwm", "timestamp":"2025-11-24 00:19:04.547140929 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.547 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.679 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.736 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.745 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.752 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.755 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807564 containerd[1704]: 2025-11-24 00:19:04.758 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.758 [INFO][4672] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.760 [INFO][4672] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.767 [INFO][4672] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.778 [INFO][4672] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.133/26] block=192.168.52.128/26 handle="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.778 [INFO][4672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.778 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
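Aside: note the serialization visible above; the coredns IPAM request (handle 07b893…) logged "About to acquire host-wide IPAM lock" at 00:19:04.547 but only acquired it at 00:19:04.679, the moment the goldmane request (handle f753c2…) released it, so addresses are handed out one request at a time per host. A deliberately simplified Go model of that lock-serialized assignment (a toy, not Calico's ipam package):

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // toyAllocator is a simplified model of per-host, lock-serialized address
    // assignment from an affine block. It is NOT Calico's IPAM code.
    type toyAllocator struct {
    	mu    sync.Mutex
    	next  netip.Addr
    	block netip.Prefix
    }

    func (a *toyAllocator) assign(handle string) (netip.Addr, bool) {
    	a.mu.Lock() // plays the role of the "host-wide IPAM lock" in the log
    	defer a.mu.Unlock()
    	if !a.block.Contains(a.next) {
    		return netip.Addr{}, false // block exhausted
    	}
    	addr := a.next
    	a.next = a.next.Next()
    	fmt.Printf("assigned %s to handle %s\n", addr, handle)
    	return addr, true
    }

    func main() {
    	alloc := &toyAllocator{
    		block: netip.MustParsePrefix("192.168.52.128/26"),
    		next:  netip.MustParseAddr("192.168.52.131"), // first address assigned in this section
    	}
    	var wg sync.WaitGroup
    	// Order here is nondeterministic; in the log, concurrent CNI requests
    	// simply queue on the real lock.
    	for _, h := range []string{"csi-node-driver-z6pwc", "goldmane-666569f655-wzcms", "coredns-668d6bf9bc-bbdwm"} {
    		wg.Add(1)
    		go func(h string) { defer wg.Done(); alloc.assign(h) }(h)
    	}
    	wg.Wait()
    }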
Nov 24 00:19:04.807774 containerd[1704]: 2025-11-24 00:19:04.778 [INFO][4672] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" HandleID="k8s-pod-network.07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.779 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ceabcd19-99bf-4acc-aafa-9d2516e3bf94", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"coredns-668d6bf9bc-bbdwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif84bfb3312c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.780 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.133/32] ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.780 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif84bfb3312c ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.785 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.786 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ceabcd19-99bf-4acc-aafa-9d2516e3bf94", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee", Pod:"coredns-668d6bf9bc-bbdwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif84bfb3312c", MAC:"d6:02:c9:55:b9:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:04.807930 containerd[1704]: 2025-11-24 00:19:04.803 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbdwm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--bbdwm-eth0" Nov 24 00:19:04.827484 systemd[1]: Started cri-containerd-f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11.scope - libcontainer container f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11. Nov 24 00:19:04.870375 containerd[1704]: time="2025-11-24T00:19:04.870095153Z" level=info msg="connecting to shim 07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee" address="unix:///run/containerd/s/d76b0726194783266fcb8c5a793ab2811d937fe6e466725640686050610f8278" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:04.896411 systemd[1]: Started cri-containerd-07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee.scope - libcontainer container 07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee. 
Nov 24 00:19:04.920822 containerd[1704]: time="2025-11-24T00:19:04.920582584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wzcms,Uid:45d4961d-4fb6-4f95-8d11-3d57944631db,Namespace:calico-system,Attempt:0,} returns sandbox id \"f753c220d7360bae61fff2ba15e83f33f47c5d617dabe1e067dc33657523ea11\"" Nov 24 00:19:04.951449 containerd[1704]: time="2025-11-24T00:19:04.951425691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbdwm,Uid:ceabcd19-99bf-4acc-aafa-9d2516e3bf94,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee\"" Nov 24 00:19:04.954034 containerd[1704]: time="2025-11-24T00:19:04.954001241Z" level=info msg="CreateContainer within sandbox \"07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:19:04.976793 containerd[1704]: time="2025-11-24T00:19:04.976767933Z" level=info msg="Container 4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:19:04.993071 containerd[1704]: time="2025-11-24T00:19:04.993047100Z" level=info msg="CreateContainer within sandbox \"07b893f8157d39f9148f604e5c480310fedb769817df3b6b50a0e04438a8c9ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029\"" Nov 24 00:19:04.993413 containerd[1704]: time="2025-11-24T00:19:04.993397068Z" level=info msg="StartContainer for \"4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029\"" Nov 24 00:19:04.994016 containerd[1704]: time="2025-11-24T00:19:04.993994132Z" level=info msg="connecting to shim 4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029" address="unix:///run/containerd/s/d76b0726194783266fcb8c5a793ab2811d937fe6e466725640686050610f8278" protocol=ttrpc version=3 Nov 24 00:19:05.004063 containerd[1704]: time="2025-11-24T00:19:05.004039400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:05.010339 systemd[1]: Started cri-containerd-4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029.scope - libcontainer container 4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029. 
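Aside: from here the log turns to image pulls; containerd reports "fetch failed after status: 404 Not Found" for ghcr.io/flatcar/calico/csi:v3.30.4, and the errors that follow repeat the unresolved reference. As a side note (not containerd's resolver, which also handles digests and default-registry rules), splitting such a reference into registry, repository and tag is plain string handling:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitRef breaks an image reference of the form host/repo:tag into parts.
    // Illustrative only; digests and registry defaulting are ignored.
    func splitRef(ref string) (registry, repository, tag string) {
    	host, rest, _ := strings.Cut(ref, "/")
    	repo, tag, ok := strings.Cut(rest, ":")
    	if !ok {
    		tag = "latest" // assumption: an untagged reference defaults to "latest"
    	}
    	return host, repo, tag
    }

    func main() {
    	for _, ref := range []string{
    		"ghcr.io/flatcar/calico/csi:v3.30.4",
    		"ghcr.io/flatcar/calico/goldmane:v3.30.4",
    		"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4",
    	} {
    		reg, repo, tag := splitRef(ref)
    		fmt.Printf("registry=%s repository=%s tag=%s\n", reg, repo, tag)
    	}
    }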
Nov 24 00:19:05.012051 containerd[1704]: time="2025-11-24T00:19:05.011273850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:19:05.012051 containerd[1704]: time="2025-11-24T00:19:05.011367352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:19:05.012810 kubelet[3165]: E1124 00:19:05.012777 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:05.013025 kubelet[3165]: E1124 00:19:05.012821 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:05.013538 kubelet[3165]: E1124 00:19:05.013497 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:05.013710 containerd[1704]: time="2025-11-24T00:19:05.013679278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:19:05.047866 containerd[1704]: time="2025-11-24T00:19:05.047255275Z" level=info msg="StartContainer for \"4528ec9a2dba8e6869ea843ef515b842210ff4920df9d57cccc7fed914241029\" returns successfully" Nov 24 00:19:05.285314 containerd[1704]: time="2025-11-24T00:19:05.285199931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:05.288930 containerd[1704]: time="2025-11-24T00:19:05.288885398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:19:05.289028 containerd[1704]: time="2025-11-24T00:19:05.288970673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:05.290398 kubelet[3165]: E1124 00:19:05.290312 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:05.290398 kubelet[3165]: E1124 00:19:05.290373 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:05.291177 containerd[1704]: time="2025-11-24T00:19:05.291006363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:19:05.291254 kubelet[3165]: E1124 00:19:05.291110 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmxvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:05.292464 kubelet[3165]: E1124 00:19:05.292242 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:05.411774 containerd[1704]: 
time="2025-11-24T00:19:05.411729068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b58cdd7d9-2thpc,Uid:8d3a110a-eb9c-4905-82ad-09bfe36d2064,Namespace:calico-system,Attempt:0,}" Nov 24 00:19:05.412272 containerd[1704]: time="2025-11-24T00:19:05.411729095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-q6djh,Uid:d05efb04-97c4-4681-b343-8c87d932c961,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:19:05.543714 systemd-networkd[1340]: cali0a8a98befe9: Link UP Nov 24 00:19:05.544916 systemd-networkd[1340]: cali0a8a98befe9: Gained carrier Nov 24 00:19:05.556250 containerd[1704]: time="2025-11-24T00:19:05.556115393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:05.559520 containerd[1704]: time="2025-11-24T00:19:05.559225937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:19:05.559520 containerd[1704]: time="2025-11-24T00:19:05.559476569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:19:05.559889 kubelet[3165]: E1124 00:19:05.559860 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:05.560344 kubelet[3165]: E1124 00:19:05.560048 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:05.560344 kubelet[3165]: E1124 00:19:05.560205 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:05.561354 kubelet[3165]: E1124 00:19:05.561317 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:05.571951 kubelet[3165]: E1124 00:19:05.571536 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.475 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0 calico-apiserver-d88c99f6b- calico-apiserver d05efb04-97c4-4681-b343-8c87d932c961 839 0 2025-11-24 00:18:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d88c99f6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 calico-apiserver-d88c99f6b-q6djh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a8a98befe9 [] [] }} ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.475 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.507 [INFO][4911] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" HandleID="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.507 [INFO][4911] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" HandleID="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"calico-apiserver-d88c99f6b-q6djh", "timestamp":"2025-11-24 00:19:05.506998155 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.507 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.507 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.507 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.512 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.516 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.520 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.521 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.523 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.524 [INFO][4911] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.525 [INFO][4911] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19 Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.529 [INFO][4911] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4911] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.134/26] block=192.168.52.128/26 handle="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.134/26] handle="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:19:05.572058 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4911] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.134/26] IPv6=[] ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" HandleID="k8s-pod-network.ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.540 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0", GenerateName:"calico-apiserver-d88c99f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d05efb04-97c4-4681-b343-8c87d932c961", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d88c99f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"calico-apiserver-d88c99f6b-q6djh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a8a98befe9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.540 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.134/32] ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.540 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a8a98befe9 ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.546 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.549 [INFO][4887] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0", GenerateName:"calico-apiserver-d88c99f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d05efb04-97c4-4681-b343-8c87d932c961", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d88c99f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19", Pod:"calico-apiserver-d88c99f6b-q6djh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a8a98befe9", MAC:"c6:b7:4e:32:94:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:05.572557 containerd[1704]: 2025-11-24 00:19:05.564 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" Namespace="calico-apiserver" Pod="calico-apiserver-d88c99f6b-q6djh" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--apiserver--d88c99f6b--q6djh-eth0" Nov 24 00:19:05.587768 kubelet[3165]: I1124 00:19:05.587719 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bbdwm" podStartSLOduration=42.58770434 podStartE2EDuration="42.58770434s" podCreationTimestamp="2025-11-24 00:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:19:05.587037028 +0000 UTC m=+49.266814276" watchObservedRunningTime="2025-11-24 00:19:05.58770434 +0000 UTC m=+49.267481669" Nov 24 00:19:05.632113 containerd[1704]: time="2025-11-24T00:19:05.631787481Z" level=info msg="connecting to shim ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19" address="unix:///run/containerd/s/744c3764416213100f9015955cbe9e6347ac177ba086791b2d67de94653b1c50" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:05.678529 systemd[1]: Started cri-containerd-ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19.scope - libcontainer container ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19. 
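Aside: the kubelet's pod_startup_latency_tracker entry above reports podStartSLOduration=42.58770434 for coredns-668d6bf9bc-bbdwm, which here is simply the gap between the pod's creation timestamp (2025-11-24 00:18:23 UTC) and the watch-observed running time (00:19:05.58770434 UTC); the pulling timestamps are zero because no image pull was needed. Reproducing that arithmetic with the standard library (illustrative only):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the pod_startup_latency_tracker entry above.
    	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-11-24 00:18:23 +0000 UTC")
    	running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2025-11-24 00:19:05.58770434 +0000 UTC")
    	fmt.Println("observed startup duration:", running.Sub(created)) // 42.58770434s
    }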
Nov 24 00:19:05.694030 systemd-networkd[1340]: calie75949a1a93: Link UP Nov 24 00:19:05.695274 systemd-networkd[1340]: calie75949a1a93: Gained carrier Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.471 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0 calico-kube-controllers-7b58cdd7d9- calico-system 8d3a110a-eb9c-4905-82ad-09bfe36d2064 834 0 2025-11-24 00:18:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b58cdd7d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 calico-kube-controllers-7b58cdd7d9-2thpc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie75949a1a93 [] [] }} ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.471 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.511 [INFO][4909] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" HandleID="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.511 [INFO][4909] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" HandleID="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"calico-kube-controllers-7b58cdd7d9-2thpc", "timestamp":"2025-11-24 00:19:05.511365911 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.511 [INFO][4909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.538 [INFO][4909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.613 [INFO][4909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.624 [INFO][4909] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.638 [INFO][4909] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.650 [INFO][4909] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.657 [INFO][4909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.657 [INFO][4909] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.659 [INFO][4909] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93 Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.670 [INFO][4909] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.687 [INFO][4909] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.135/26] block=192.168.52.128/26 handle="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.687 [INFO][4909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.135/26] handle="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.687 [INFO][4909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
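The IPAM trace above follows the usual path: look up the host's affinities, load block 192.168.52.128/26, then claim 192.168.52.135 from it for calico-kube-controllers. For illustration only (standard-library Go, not the Calico ipam code), a quick check that the claimed address really sits inside that /26:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Affinity block and claimed address, both taken from the log above.
	_, block, err := net.ParseCIDR("192.168.52.128/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.52.135")

	// A /26 spans 64 addresses, so this block covers .128 through .191.
	fmt.Printf("%s contains %s: %v\n", block, ip, block.Contains(ip)) // prints true
}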
Nov 24 00:19:05.714027 containerd[1704]: 2025-11-24 00:19:05.687 [INFO][4909] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.135/26] IPv6=[] ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" HandleID="k8s-pod-network.83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.689 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0", GenerateName:"calico-kube-controllers-7b58cdd7d9-", Namespace:"calico-system", SelfLink:"", UID:"8d3a110a-eb9c-4905-82ad-09bfe36d2064", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b58cdd7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"calico-kube-controllers-7b58cdd7d9-2thpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie75949a1a93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.690 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.135/32] ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.690 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie75949a1a93 ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.694 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" 
WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.696 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0", GenerateName:"calico-kube-controllers-7b58cdd7d9-", Namespace:"calico-system", SelfLink:"", UID:"8d3a110a-eb9c-4905-82ad-09bfe36d2064", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b58cdd7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93", Pod:"calico-kube-controllers-7b58cdd7d9-2thpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie75949a1a93", MAC:"96:96:76:fb:45:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:05.714729 containerd[1704]: 2025-11-24 00:19:05.711 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" Namespace="calico-system" Pod="calico-kube-controllers-7b58cdd7d9-2thpc" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-calico--kube--controllers--7b58cdd7d9--2thpc-eth0" Nov 24 00:19:05.758678 containerd[1704]: time="2025-11-24T00:19:05.757857574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d88c99f6b-q6djh,Uid:d05efb04-97c4-4681-b343-8c87d932c961,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ac82baf89735679c9e5d9470b763b0e1fddd0feb5639321ea374ed6a3a61fe19\"" Nov 24 00:19:05.760628 containerd[1704]: time="2025-11-24T00:19:05.760455172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:05.767312 containerd[1704]: time="2025-11-24T00:19:05.767275579Z" level=info msg="connecting to shim 83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93" address="unix:///run/containerd/s/e40c28284de511eac51862a81956995ef6d95ca25f38444c09c032acae8e7561" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:05.794453 systemd[1]: Started cri-containerd-83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93.scope - libcontainer container 
83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93. Nov 24 00:19:05.842457 containerd[1704]: time="2025-11-24T00:19:05.842382962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b58cdd7d9-2thpc,Uid:8d3a110a-eb9c-4905-82ad-09bfe36d2064,Namespace:calico-system,Attempt:0,} returns sandbox id \"83c0439f322abfe49afdbc41a66117c734de7f7ce754c4dc255b76cc18f93d93\"" Nov 24 00:19:06.016781 systemd-networkd[1340]: califbc3449359e: Gained IPv6LL Nov 24 00:19:06.022375 containerd[1704]: time="2025-11-24T00:19:06.022336868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:06.025644 containerd[1704]: time="2025-11-24T00:19:06.025616908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:06.025771 containerd[1704]: time="2025-11-24T00:19:06.025652581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:06.026069 kubelet[3165]: E1124 00:19:06.025856 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:06.026069 kubelet[3165]: E1124 00:19:06.025896 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:06.026520 kubelet[3165]: E1124 00:19:06.026148 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:06.026682 containerd[1704]: time="2025-11-24T00:19:06.026315476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:19:06.027984 kubelet[3165]: E1124 00:19:06.027924 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:06.294382 containerd[1704]: time="2025-11-24T00:19:06.294337332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:06.297526 containerd[1704]: time="2025-11-24T00:19:06.297494589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:19:06.297644 containerd[1704]: time="2025-11-24T00:19:06.297550292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:06.297801 kubelet[3165]: E1124 00:19:06.297733 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:06.297856 kubelet[3165]: E1124 00:19:06.297822 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:06.298197 kubelet[3165]: E1124 00:19:06.297982 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdxwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:06.299525 kubelet[3165]: E1124 00:19:06.299490 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:06.336703 systemd-networkd[1340]: cali27db3273828: Gained IPv6LL Nov 24 00:19:06.412519 containerd[1704]: time="2025-11-24T00:19:06.412439034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d8vnm,Uid:364daebb-f821-452e-8e18-337f9a9c926f,Namespace:kube-system,Attempt:0,}" Nov 24 00:19:06.511904 systemd-networkd[1340]: calif73bf530a51: Link UP Nov 24 00:19:06.512359 systemd-networkd[1340]: calif73bf530a51: Gained carrier Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.448 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0 coredns-668d6bf9bc- kube-system 364daebb-f821-452e-8e18-337f9a9c926f 830 0 2025-11-24 00:18:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-8bf8e53aa8 coredns-668d6bf9bc-d8vnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif73bf530a51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.448 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.474 [INFO][5046] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" HandleID="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.474 [INFO][5046] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" HandleID="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-8bf8e53aa8", "pod":"coredns-668d6bf9bc-d8vnm", "timestamp":"2025-11-24 00:19:06.474609987 +0000 UTC"}, Hostname:"ci-4459.2.1-a-8bf8e53aa8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.476 [INFO][5046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.476 [INFO][5046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.476 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-8bf8e53aa8' Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.484 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.487 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.490 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.491 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.493 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.493 [INFO][5046] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.495 [INFO][5046] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.500 [INFO][5046] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.508 [INFO][5046] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.52.136/26] block=192.168.52.128/26 handle="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.508 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.52.136/26] handle="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" host="ci-4459.2.1-a-8bf8e53aa8" Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.508 [INFO][5046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:19:06.528130 containerd[1704]: 2025-11-24 00:19:06.508 [INFO][5046] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.52.136/26] IPv6=[] ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" HandleID="k8s-pod-network.6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Workload="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.509 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"364daebb-f821-452e-8e18-337f9a9c926f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"", Pod:"coredns-668d6bf9bc-d8vnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif73bf530a51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.509 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.136/32] ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.509 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif73bf530a51 ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.512 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.514 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"364daebb-f821-452e-8e18-337f9a9c926f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-8bf8e53aa8", ContainerID:"6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa", Pod:"coredns-668d6bf9bc-d8vnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif73bf530a51", MAC:"9a:cd:32:32:b3:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:19:06.528591 containerd[1704]: 2025-11-24 00:19:06.526 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-d8vnm" WorkloadEndpoint="ci--4459.2.1--a--8bf8e53aa8-k8s-coredns--668d6bf9bc--d8vnm-eth0" Nov 24 00:19:06.583243 kubelet[3165]: E1124 00:19:06.582394 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:06.585525 containerd[1704]: time="2025-11-24T00:19:06.585425468Z" level=info msg="connecting to shim 6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa" 
address="unix:///run/containerd/s/9885ce9a8756a42024c777506621d92a9c905243548b4b0ea946d3006659707a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:19:06.592266 systemd-networkd[1340]: calif84bfb3312c: Gained IPv6LL Nov 24 00:19:06.598998 kubelet[3165]: E1124 00:19:06.598970 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:06.599117 kubelet[3165]: E1124 00:19:06.599049 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:06.600324 kubelet[3165]: E1124 00:19:06.600292 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:06.626325 systemd[1]: Started cri-containerd-6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa.scope - libcontainer container 6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa. 
Nov 24 00:19:06.705660 containerd[1704]: time="2025-11-24T00:19:06.705565540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d8vnm,Uid:364daebb-f821-452e-8e18-337f9a9c926f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa\"" Nov 24 00:19:06.708360 containerd[1704]: time="2025-11-24T00:19:06.708278787Z" level=info msg="CreateContainer within sandbox \"6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:19:06.735576 containerd[1704]: time="2025-11-24T00:19:06.734447918Z" level=info msg="Container 32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:19:06.749367 containerd[1704]: time="2025-11-24T00:19:06.749340761Z" level=info msg="CreateContainer within sandbox \"6439c4d22179c33b5c74cece850b5b9922879ec5441741982ee5a6d11b05f2fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077\"" Nov 24 00:19:06.749845 containerd[1704]: time="2025-11-24T00:19:06.749820657Z" level=info msg="StartContainer for \"32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077\"" Nov 24 00:19:06.750773 containerd[1704]: time="2025-11-24T00:19:06.750735386Z" level=info msg="connecting to shim 32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077" address="unix:///run/containerd/s/9885ce9a8756a42024c777506621d92a9c905243548b4b0ea946d3006659707a" protocol=ttrpc version=3 Nov 24 00:19:06.765324 systemd[1]: Started cri-containerd-32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077.scope - libcontainer container 32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077. 
Nov 24 00:19:06.798178 containerd[1704]: time="2025-11-24T00:19:06.795638922Z" level=info msg="StartContainer for \"32f007ce732f041f0596570ed7c2a2872197485aaaeaf6079c058c9004d91077\" returns successfully" Nov 24 00:19:07.040480 systemd-networkd[1340]: cali0a8a98befe9: Gained IPv6LL Nov 24 00:19:07.104297 systemd-networkd[1340]: calie75949a1a93: Gained IPv6LL Nov 24 00:19:07.601029 kubelet[3165]: E1124 00:19:07.600616 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:07.603492 kubelet[3165]: E1124 00:19:07.603371 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:07.603820 kubelet[3165]: E1124 00:19:07.603791 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:07.649709 kubelet[3165]: I1124 00:19:07.649665 3165 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d8vnm" podStartSLOduration=44.649650215 podStartE2EDuration="44.649650215s" podCreationTimestamp="2025-11-24 00:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:19:07.649448865 +0000 UTC m=+51.329226110" watchObservedRunningTime="2025-11-24 00:19:07.649650215 +0000 UTC m=+51.329427459" Nov 24 00:19:08.448310 systemd-networkd[1340]: calif73bf530a51: Gained IPv6LL Nov 24 00:19:11.411890 containerd[1704]: time="2025-11-24T00:19:11.411774356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 
00:19:11.674550 containerd[1704]: time="2025-11-24T00:19:11.674425850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:11.677506 containerd[1704]: time="2025-11-24T00:19:11.677459060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:19:11.677587 containerd[1704]: time="2025-11-24T00:19:11.677465549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:19:11.677756 kubelet[3165]: E1124 00:19:11.677723 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:11.678034 kubelet[3165]: E1124 00:19:11.677769 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:11.678034 kubelet[3165]: E1124 00:19:11.677888 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6fa07e6ad45646be8de5ce808d3bf5bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:11.680478 containerd[1704]: time="2025-11-24T00:19:11.680440583Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:19:11.950707 containerd[1704]: time="2025-11-24T00:19:11.950575377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:11.953818 containerd[1704]: time="2025-11-24T00:19:11.953768748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:19:11.953886 containerd[1704]: time="2025-11-24T00:19:11.953873382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:11.954043 kubelet[3165]: E1124 00:19:11.954004 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:11.954091 kubelet[3165]: E1124 00:19:11.954049 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:11.954530 kubelet[3165]: E1124 00:19:11.954258 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:11.955484 kubelet[3165]: E1124 00:19:11.955450 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:19:18.411658 containerd[1704]: time="2025-11-24T00:19:18.411590889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:18.677039 containerd[1704]: time="2025-11-24T00:19:18.676897434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:18.680225 containerd[1704]: time="2025-11-24T00:19:18.680192570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:18.680225 containerd[1704]: time="2025-11-24T00:19:18.680245010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:18.680418 kubelet[3165]: E1124 00:19:18.680367 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:18.680710 kubelet[3165]: E1124 00:19:18.680427 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:18.680710 kubelet[3165]: E1124 00:19:18.680556 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kh2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:18.682072 kubelet[3165]: E1124 00:19:18.682013 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:20.413618 containerd[1704]: time="2025-11-24T00:19:20.413553215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:19:20.686620 containerd[1704]: time="2025-11-24T00:19:20.686478877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:20.689801 containerd[1704]: time="2025-11-24T00:19:20.689768743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:19:20.689801 containerd[1704]: time="2025-11-24T00:19:20.689819872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:20.689983 kubelet[3165]: E1124 00:19:20.689935 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:20.690363 kubelet[3165]: E1124 00:19:20.689994 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:20.690363 kubelet[3165]: E1124 00:19:20.690229 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdxwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:20.690523 containerd[1704]: time="2025-11-24T00:19:20.690446049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:20.691856 kubelet[3165]: E1124 00:19:20.691821 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" 
podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:20.959262 containerd[1704]: time="2025-11-24T00:19:20.959026369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:20.962152 containerd[1704]: time="2025-11-24T00:19:20.962106914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:20.962523 containerd[1704]: time="2025-11-24T00:19:20.962112627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:20.962588 kubelet[3165]: E1124 00:19:20.962322 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:20.962588 kubelet[3165]: E1124 00:19:20.962364 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:20.962588 kubelet[3165]: E1124 00:19:20.962539 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:20.963108 containerd[1704]: time="2025-11-24T00:19:20.963031255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:19:20.964132 kubelet[3165]: E1124 00:19:20.964095 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:21.230039 containerd[1704]: time="2025-11-24T00:19:21.229909483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:21.233213 containerd[1704]: time="2025-11-24T00:19:21.233135616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:19:21.233213 containerd[1704]: time="2025-11-24T00:19:21.233186328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:19:21.233388 kubelet[3165]: E1124 00:19:21.233350 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:21.233463 kubelet[3165]: E1124 00:19:21.233401 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:21.233559 kubelet[3165]: E1124 00:19:21.233531 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:21.235749 containerd[1704]: time="2025-11-24T00:19:21.235724110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:19:21.491434 containerd[1704]: time="2025-11-24T00:19:21.491312606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:21.494925 containerd[1704]: time="2025-11-24T00:19:21.494889847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:19:21.495074 containerd[1704]: time="2025-11-24T00:19:21.494966939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:19:21.495139 kubelet[3165]: E1124 00:19:21.495091 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:21.495195 kubelet[3165]: E1124 00:19:21.495150 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:21.495638 kubelet[3165]: E1124 00:19:21.495346 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:21.496606 kubelet[3165]: E1124 00:19:21.496550 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:22.412429 containerd[1704]: time="2025-11-24T00:19:22.412310640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:19:22.693344 containerd[1704]: time="2025-11-24T00:19:22.693217434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:22.696605 containerd[1704]: time="2025-11-24T00:19:22.696564594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:19:22.696698 containerd[1704]: time="2025-11-24T00:19:22.696565928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:22.696846 kubelet[3165]: E1124 00:19:22.696811 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:22.697099 kubelet[3165]: E1124 00:19:22.696854 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:22.697099 kubelet[3165]: E1124 00:19:22.697013 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmxvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:22.698568 kubelet[3165]: E1124 00:19:22.698524 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:26.413701 kubelet[3165]: E1124 
00:19:26.413338 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:19:32.416183 kubelet[3165]: E1124 00:19:32.416068 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:33.413912 kubelet[3165]: E1124 00:19:33.413864 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:33.414732 kubelet[3165]: E1124 00:19:33.414703 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:34.412303 kubelet[3165]: E1124 00:19:34.412219 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:36.414646 kubelet[3165]: E1124 00:19:36.414503 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:41.414207 containerd[1704]: time="2025-11-24T00:19:41.413233039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:19:41.710209 containerd[1704]: time="2025-11-24T00:19:41.710059723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:41.713744 containerd[1704]: time="2025-11-24T00:19:41.713670212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:19:41.713744 containerd[1704]: time="2025-11-24T00:19:41.713813784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:19:41.714179 kubelet[3165]: E1124 00:19:41.714108 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:41.714712 kubelet[3165]: E1124 00:19:41.714295 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:41.714712 kubelet[3165]: E1124 00:19:41.714422 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6fa07e6ad45646be8de5ce808d3bf5bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:41.716927 containerd[1704]: time="2025-11-24T00:19:41.716884164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:19:41.981991 containerd[1704]: time="2025-11-24T00:19:41.981825048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:41.984898 containerd[1704]: time="2025-11-24T00:19:41.984858210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:19:41.984987 containerd[1704]: time="2025-11-24T00:19:41.984962282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:41.985136 kubelet[3165]: E1124 00:19:41.985104 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:41.985229 kubelet[3165]: E1124 00:19:41.985150 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:41.985316 kubelet[3165]: E1124 00:19:41.985289 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:41.986780 kubelet[3165]: E1124 00:19:41.986732 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:19:44.416012 containerd[1704]: time="2025-11-24T00:19:44.415966612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:19:44.675658 containerd[1704]: time="2025-11-24T00:19:44.675534635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
00:19:44.679240 containerd[1704]: time="2025-11-24T00:19:44.679200616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:19:44.679360 containerd[1704]: time="2025-11-24T00:19:44.679285174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:44.681348 kubelet[3165]: E1124 00:19:44.681308 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:44.681679 kubelet[3165]: E1124 00:19:44.681359 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:44.681679 kubelet[3165]: E1124 00:19:44.681495 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmxvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:44.683001 kubelet[3165]: E1124 00:19:44.682966 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:19:46.414859 containerd[1704]: time="2025-11-24T00:19:46.414215589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:19:46.684734 containerd[1704]: time="2025-11-24T00:19:46.684617632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:46.688127 containerd[1704]: time="2025-11-24T00:19:46.687928298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:19:46.688127 containerd[1704]: time="2025-11-24T00:19:46.688026286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:19:46.688344 kubelet[3165]: E1124 00:19:46.688227 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:46.688344 kubelet[3165]: E1124 00:19:46.688284 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:46.688639 kubelet[3165]: E1124 00:19:46.688410 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:46.691651 containerd[1704]: time="2025-11-24T00:19:46.691595547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:19:46.975277 containerd[1704]: time="2025-11-24T00:19:46.974937519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:46.978421 containerd[1704]: time="2025-11-24T00:19:46.978372770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:19:46.978657 containerd[1704]: time="2025-11-24T00:19:46.978407131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:19:46.978724 kubelet[3165]: E1124 00:19:46.978684 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:46.978769 kubelet[3165]: E1124 00:19:46.978731 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:46.978886 kubelet[3165]: E1124 00:19:46.978849 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:46.980424 kubelet[3165]: E1124 00:19:46.980349 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:19:47.413212 containerd[1704]: time="2025-11-24T00:19:47.413091708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:47.678592 containerd[1704]: time="2025-11-24T00:19:47.678462572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:47.685434 containerd[1704]: time="2025-11-24T00:19:47.685380699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:47.685576 containerd[1704]: time="2025-11-24T00:19:47.685553497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:47.687311 kubelet[3165]: E1124 00:19:47.685712 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:47.687389 kubelet[3165]: E1124 00:19:47.687327 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:47.687828 kubelet[3165]: E1124 00:19:47.687581 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kh2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:47.687972 containerd[1704]: time="2025-11-24T00:19:47.687616339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:19:47.689133 kubelet[3165]: E1124 00:19:47.689098 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:19:47.962943 containerd[1704]: time="2025-11-24T00:19:47.962822399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:47.966391 containerd[1704]: time="2025-11-24T00:19:47.966237730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:19:47.966391 containerd[1704]: time="2025-11-24T00:19:47.966346568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:47.966552 kubelet[3165]: E1124 00:19:47.966520 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:47.966597 kubelet[3165]: E1124 00:19:47.966568 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:47.966879 kubelet[3165]: E1124 00:19:47.966824 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdxwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:47.968099 kubelet[3165]: E1124 00:19:47.968068 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:48.412583 containerd[1704]: time="2025-11-24T00:19:48.412507359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:48.670006 containerd[1704]: time="2025-11-24T00:19:48.669882343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:48.673488 containerd[1704]: time="2025-11-24T00:19:48.673355934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:48.673488 containerd[1704]: time="2025-11-24T00:19:48.673458691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:48.674368 kubelet[3165]: E1124 00:19:48.673642 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:48.674368 kubelet[3165]: E1124 00:19:48.673688 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:48.674368 kubelet[3165]: E1124 00:19:48.673827 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:48.675735 kubelet[3165]: E1124 00:19:48.675695 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:19:57.413683 kubelet[3165]: E1124 00:19:57.413632 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:19:59.413088 kubelet[3165]: E1124 00:19:59.413028 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:19:59.414214 kubelet[3165]: 
E1124 00:19:59.413536 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:20:00.414613 kubelet[3165]: E1124 00:20:00.414500 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:20:01.412265 kubelet[3165]: E1124 00:20:01.411995 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:20:02.413388 kubelet[3165]: E1124 00:20:02.412653 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:20:04.746182 systemd[1]: Started sshd@7-10.200.4.12:22-10.200.16.10:51550.service - OpenSSH per-connection server daemon (10.200.16.10:51550). Nov 24 00:20:05.359216 sshd[5238]: Accepted publickey for core from 10.200.16.10 port 51550 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:05.359849 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:05.364246 systemd-logind[1680]: New session 10 of user core. Nov 24 00:20:05.370310 systemd[1]: Started session-10.scope - Session 10 of User core. 
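
The containerd lines above ("fetch failed after status: 404 Not Found" host=ghcr.io) show the registry itself has no manifest for the requested tags, not a node-side credential or network problem. A minimal sketch of how one might confirm that out-of-band, assuming anonymous pull tokens are issued by ghcr.io's standard OCI distribution/token endpoints; the image reference is taken from the log, everything else is illustrative:

    # Sketch: check whether a tag exists on ghcr.io via the OCI distribution API.
    # Assumes the standard anonymous token flow at https://ghcr.io/token; the
    # repository/tag below are the ones containerd failed to resolve in the log.
    import requests

    REPO = "flatcar/calico/kube-controllers"
    TAG = "v3.30.4"

    # 1. Obtain an anonymous pull token for the repository.
    tok = requests.get(
        "https://ghcr.io/token",
        params={"service": "ghcr.io", "scope": f"repository:{REPO}:pull"},
        timeout=10,
    ).json()["token"]

    # 2. Ask for the manifest; a 404 here matches containerd's "not found" error.
    resp = requests.head(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {tok}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        timeout=10,
    )
    print(TAG, "->", resp.status_code)  # 200 = tag exists, 404 = missing (as in the log)
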
Nov 24 00:20:05.869847 sshd[5241]: Connection closed by 10.200.16.10 port 51550 Nov 24 00:20:05.871502 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:05.876734 systemd-logind[1680]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:20:05.877825 systemd[1]: sshd@7-10.200.4.12:22-10.200.16.10:51550.service: Deactivated successfully. Nov 24 00:20:05.880727 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:20:05.883834 systemd-logind[1680]: Removed session 10. Nov 24 00:20:10.980759 systemd[1]: Started sshd@8-10.200.4.12:22-10.200.16.10:34378.service - OpenSSH per-connection server daemon (10.200.16.10:34378). Nov 24 00:20:11.412487 kubelet[3165]: E1124 00:20:11.412421 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:20:11.414972 kubelet[3165]: E1124 00:20:11.414929 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:20:11.577385 sshd[5254]: Accepted publickey for core from 10.200.16.10 port 34378 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:11.579324 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:11.584878 systemd-logind[1680]: New session 11 of user core. Nov 24 00:20:11.593344 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:20:12.121058 sshd[5257]: Connection closed by 10.200.16.10 port 34378 Nov 24 00:20:12.123355 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:12.128144 systemd-logind[1680]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:20:12.130491 systemd[1]: sshd@8-10.200.4.12:22-10.200.16.10:34378.service: Deactivated successfully. Nov 24 00:20:12.133108 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:20:12.137006 systemd-logind[1680]: Removed session 11. 
Nov 24 00:20:12.420190 kubelet[3165]: E1124 00:20:12.419704 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:20:13.413332 kubelet[3165]: E1124 00:20:13.413018 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:20:14.412804 kubelet[3165]: E1124 00:20:14.412202 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:20:15.412864 kubelet[3165]: E1124 00:20:15.412317 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:20:17.231425 systemd[1]: Started sshd@9-10.200.4.12:22-10.200.16.10:34382.service - OpenSSH per-connection server daemon (10.200.16.10:34382). Nov 24 00:20:17.829186 sshd[5272]: Accepted publickey for core from 10.200.16.10 port 34382 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:17.829986 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:17.835333 systemd-logind[1680]: New session 12 of user core. Nov 24 00:20:17.843351 systemd[1]: Started session-12.scope - Session 12 of User core. 
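
Between the hard ErrImagePull failures, the kubelet only emits "Back-off pulling image ..." (ImagePullBackOff) and skips the pod sync; actual PullImage attempts become progressively sparser. A small sketch of an exponential pull-backoff schedule, assuming the commonly cited kubelet defaults of a 10s initial delay doubling up to a 5-minute cap (these values are assumptions, not taken from this log):

    # Sketch: exponential image-pull backoff schedule (assumed defaults: 10s base,
    # doubling, capped at 300s), relating the sparse PullImage attempts above to
    # the much more frequent "Back-off pulling image" status messages.
    def backoff_schedule(base=10.0, cap=300.0, attempts=8):
        delay, total = base, 0.0
        for attempt in range(1, attempts + 1):
            total += delay
            yield attempt, delay, total
            delay = min(delay * 2, cap)

    for attempt, delay, total in backoff_schedule():
        print(f"retry {attempt}: wait {delay:5.0f}s (~{total/60:.1f} min since first failure)")
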
Nov 24 00:20:18.390031 sshd[5275]: Connection closed by 10.200.16.10 port 34382 Nov 24 00:20:18.392126 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:18.396447 systemd-logind[1680]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:20:18.397322 systemd[1]: sshd@9-10.200.4.12:22-10.200.16.10:34382.service: Deactivated successfully. Nov 24 00:20:18.399517 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:20:18.402282 systemd-logind[1680]: Removed session 12. Nov 24 00:20:18.496414 systemd[1]: Started sshd@10-10.200.4.12:22-10.200.16.10:34384.service - OpenSSH per-connection server daemon (10.200.16.10:34384). Nov 24 00:20:19.105218 sshd[5294]: Accepted publickey for core from 10.200.16.10 port 34384 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:19.106400 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:19.110241 systemd-logind[1680]: New session 13 of user core. Nov 24 00:20:19.115348 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:20:19.621121 sshd[5297]: Connection closed by 10.200.16.10 port 34384 Nov 24 00:20:19.621727 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:19.627369 systemd-logind[1680]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:20:19.628492 systemd[1]: sshd@10-10.200.4.12:22-10.200.16.10:34384.service: Deactivated successfully. Nov 24 00:20:19.631432 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:20:19.634089 systemd-logind[1680]: Removed session 13. Nov 24 00:20:19.727281 systemd[1]: Started sshd@11-10.200.4.12:22-10.200.16.10:34388.service - OpenSSH per-connection server daemon (10.200.16.10:34388). Nov 24 00:20:20.340766 sshd[5307]: Accepted publickey for core from 10.200.16.10 port 34388 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:20.341810 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:20.348390 systemd-logind[1680]: New session 14 of user core. Nov 24 00:20:20.351666 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:20:20.825065 sshd[5310]: Connection closed by 10.200.16.10 port 34388 Nov 24 00:20:20.827360 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:20.831335 systemd[1]: sshd@11-10.200.4.12:22-10.200.16.10:34388.service: Deactivated successfully. Nov 24 00:20:20.833172 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:20:20.833908 systemd-logind[1680]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:20:20.835207 systemd-logind[1680]: Removed session 14. 
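
Interleaved with the pull failures, sshd and systemd-logind record a series of short-lived sessions for user core (sessions 10 through 14 so far). A small sketch that pairs the "New session N" / "Removed session N" lines from a journal dump like this one to compute session lifetimes; the filename, regex, and hard-coded year (taken from the containerd timestamps above) are illustrative assumptions:

    # Sketch: pair systemd-logind "New session N" / "Removed session N" lines from
    # a journal text dump and print each session's lifetime.
    import re
    from datetime import datetime

    LINE = re.compile(r"(\w{3} \d{2} \d{2}:\d{2}:\d{2})\.\d+ systemd-logind\[\d+\]: "
                      r"(New|Removed) session (\d+)")

    opened = {}
    with open("journal.txt") as fh:          # assumed dump of the log shown here
        for line in fh:
            # findall, because this dump packs several journal entries per physical line
            for ts, event, sid in LINE.findall(line):
                t = datetime.strptime("2025 " + ts, "%Y %b %d %H:%M:%S")
                if event == "New":
                    opened[sid] = t
                elif sid in opened:
                    print(f"session {sid}: {(t - opened.pop(sid)).total_seconds():.0f}s")
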
Nov 24 00:20:24.412379 kubelet[3165]: E1124 00:20:24.411836 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:20:25.412364 kubelet[3165]: E1124 00:20:25.412317 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:20:25.413420 containerd[1704]: time="2025-11-24T00:20:25.413361422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:20:25.667041 containerd[1704]: time="2025-11-24T00:20:25.666902506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:25.670197 containerd[1704]: time="2025-11-24T00:20:25.670142660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:20:25.670287 containerd[1704]: time="2025-11-24T00:20:25.670238799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:20:25.670418 kubelet[3165]: E1124 00:20:25.670384 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:20:25.670704 kubelet[3165]: E1124 00:20:25.670431 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:20:25.670704 kubelet[3165]: E1124 00:20:25.670551 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6fa07e6ad45646be8de5ce808d3bf5bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:25.674028 containerd[1704]: time="2025-11-24T00:20:25.673976980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:20:25.931361 systemd[1]: Started sshd@12-10.200.4.12:22-10.200.16.10:33364.service - OpenSSH per-connection server daemon (10.200.16.10:33364). 
Nov 24 00:20:25.937256 containerd[1704]: time="2025-11-24T00:20:25.937214448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:25.940896 containerd[1704]: time="2025-11-24T00:20:25.940778268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:20:25.940896 containerd[1704]: time="2025-11-24T00:20:25.940873484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:20:25.941731 kubelet[3165]: E1124 00:20:25.941115 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:20:25.941731 kubelet[3165]: E1124 00:20:25.941180 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:20:25.941731 kubelet[3165]: E1124 00:20:25.941298 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod whisker-5c79b749df-5s9h6_calico-system(55ebf745-9192-40af-99eb-e78240db2491): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:25.942796 kubelet[3165]: E1124 00:20:25.942758 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:20:26.417197 kubelet[3165]: E1124 00:20:26.417055 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:20:26.520507 sshd[5327]: Accepted publickey for core from 10.200.16.10 port 33364 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:26.521877 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:26.527303 systemd-logind[1680]: New session 15 of user core. Nov 24 00:20:26.533356 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:20:27.022211 sshd[5330]: Connection closed by 10.200.16.10 port 33364 Nov 24 00:20:27.023245 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:27.027297 systemd[1]: sshd@12-10.200.4.12:22-10.200.16.10:33364.service: Deactivated successfully. Nov 24 00:20:27.029353 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:20:27.031104 systemd-logind[1680]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:20:27.032710 systemd-logind[1680]: Removed session 15. 
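
The same pods keep cycling between ErrImagePull and ImagePullBackOff: whisker, goldmane, calico-kube-controllers, the two calico-apiserver replicas, and csi-node-driver. A minimal sketch of how one might list the affected containers and reasons cluster-side with the official Kubernetes Python client; the namespaces follow the log, the script itself is illustrative:

    # Sketch: list containers stuck in ErrImagePull / ImagePullBackOff in the
    # namespaces seen in the log above, using the kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()                # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    for ns in ("calico-system", "calico-apiserver"):
        for pod in v1.list_namespaced_pod(ns).items:
            for cs in pod.status.container_statuses or []:
                waiting = cs.state.waiting
                if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                    print(f"{ns}/{pod.metadata.name} {cs.name}: {waiting.reason} ({cs.image})")
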
Nov 24 00:20:27.411496 kubelet[3165]: E1124 00:20:27.411444 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:20:30.412298 containerd[1704]: time="2025-11-24T00:20:30.412180312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:20:30.698745 containerd[1704]: time="2025-11-24T00:20:30.698576874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:30.701927 containerd[1704]: time="2025-11-24T00:20:30.701885576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:20:30.702087 containerd[1704]: time="2025-11-24T00:20:30.701910124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:20:30.702220 kubelet[3165]: E1124 00:20:30.702173 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:20:30.702559 kubelet[3165]: E1124 00:20:30.702230 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:20:30.702559 kubelet[3165]: E1124 00:20:30.702356 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kh2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-g5jrw_calico-apiserver(59c9a609-1992-4463-b755-389571dcaa93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:30.703851 kubelet[3165]: E1124 00:20:30.703807 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:20:32.146386 systemd[1]: Started sshd@13-10.200.4.12:22-10.200.16.10:60794.service - OpenSSH per-connection server daemon (10.200.16.10:60794). Nov 24 00:20:32.791193 sshd[5377]: Accepted publickey for core from 10.200.16.10 port 60794 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:32.792352 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:32.796821 systemd-logind[1680]: New session 16 of user core. Nov 24 00:20:32.803316 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 24 00:20:33.289486 sshd[5394]: Connection closed by 10.200.16.10 port 60794 Nov 24 00:20:33.291357 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:33.294759 systemd[1]: sshd@13-10.200.4.12:22-10.200.16.10:60794.service: Deactivated successfully. Nov 24 00:20:33.295057 systemd-logind[1680]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:20:33.296748 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:20:33.298523 systemd-logind[1680]: Removed session 16. Nov 24 00:20:35.411909 containerd[1704]: time="2025-11-24T00:20:35.411869778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:20:35.681833 containerd[1704]: time="2025-11-24T00:20:35.681697671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:35.685733 containerd[1704]: time="2025-11-24T00:20:35.685197059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:20:35.685951 containerd[1704]: time="2025-11-24T00:20:35.685350509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:20:35.686173 kubelet[3165]: E1124 00:20:35.686129 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:20:35.687710 kubelet[3165]: E1124 00:20:35.686493 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:20:35.687710 kubelet[3165]: E1124 00:20:35.686658 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmxvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wzcms_calico-system(45d4961d-4fb6-4f95-8d11-3d57944631db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:35.687984 kubelet[3165]: E1124 00:20:35.687958 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:20:37.412350 containerd[1704]: 
time="2025-11-24T00:20:37.412219716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:20:37.671122 containerd[1704]: time="2025-11-24T00:20:37.670676368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:37.674437 containerd[1704]: time="2025-11-24T00:20:37.674304008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:20:37.674437 containerd[1704]: time="2025-11-24T00:20:37.674405051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:20:37.674740 kubelet[3165]: E1124 00:20:37.674705 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:20:37.675293 kubelet[3165]: E1124 00:20:37.674996 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:20:37.675502 kubelet[3165]: E1124 00:20:37.675354 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdxwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b58cdd7d9-2thpc_calico-system(8d3a110a-eb9c-4905-82ad-09bfe36d2064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:37.676590 kubelet[3165]: E1124 00:20:37.676553 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:20:38.398417 systemd[1]: Started sshd@14-10.200.4.12:22-10.200.16.10:60804.service - OpenSSH per-connection server daemon (10.200.16.10:60804). Nov 24 00:20:38.418476 kubelet[3165]: E1124 00:20:38.417973 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:20:38.997536 sshd[5408]: Accepted publickey for core from 10.200.16.10 port 60804 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:38.999859 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:39.005922 systemd-logind[1680]: New session 17 of user core. Nov 24 00:20:39.012333 systemd[1]: Started session-17.scope - Session 17 of User core. 
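
Every failure in this section reduces to the same root cause: none of the ghcr.io/flatcar/calico/*:v3.30.4 references can be resolved. A short sketch that condenses a journal dump like this one into a list of failing image references and the pods that reference them; the filename and regexes are illustrative, and because this dump packs several entries per physical line the image-to-pod association is approximate:

    # Sketch: summarize which image references fail to resolve and which pods want
    # them, from a journal text dump of the kind shown above.
    import re
    from collections import defaultdict

    IMG = re.compile(r'failed to resolve reference \\*"([^"\\]+)\\*"')  # tolerates \" escaping
    POD = re.compile(r'pod="([^"]+)"')

    wanted = defaultdict(set)
    with open("journal.txt") as fh:          # assumed dump of the log shown here
        for line in fh:
            images = set(IMG.findall(line))
            pods = set(POD.findall(line))
            for image in images:
                wanted[image] |= pods        # approximate: per physical line, not per entry

    for image, pods in sorted(wanted.items()):
        print(image)
        for pod in sorted(pods):
            print("   ", pod)
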
Nov 24 00:20:39.513254 sshd[5411]: Connection closed by 10.200.16.10 port 60804 Nov 24 00:20:39.514330 sshd-session[5408]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:39.519775 systemd[1]: sshd@14-10.200.4.12:22-10.200.16.10:60804.service: Deactivated successfully. Nov 24 00:20:39.522324 systemd-logind[1680]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:20:39.524415 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:20:39.528720 systemd-logind[1680]: Removed session 17. Nov 24 00:20:39.619032 systemd[1]: Started sshd@15-10.200.4.12:22-10.200.16.10:60808.service - OpenSSH per-connection server daemon (10.200.16.10:60808). Nov 24 00:20:40.217238 sshd[5423]: Accepted publickey for core from 10.200.16.10 port 60808 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:40.218331 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:40.222664 systemd-logind[1680]: New session 18 of user core. Nov 24 00:20:40.226380 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:20:40.412851 containerd[1704]: time="2025-11-24T00:20:40.412814201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:20:40.675473 containerd[1704]: time="2025-11-24T00:20:40.675323018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:40.679270 containerd[1704]: time="2025-11-24T00:20:40.679180640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:20:40.679270 containerd[1704]: time="2025-11-24T00:20:40.679268858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:20:40.679419 kubelet[3165]: E1124 00:20:40.679382 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:20:40.679709 kubelet[3165]: E1124 00:20:40.679434 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:20:40.679709 kubelet[3165]: E1124 00:20:40.679657 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:40.680446 containerd[1704]: time="2025-11-24T00:20:40.680412301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:20:40.771122 sshd[5426]: Connection closed by 10.200.16.10 port 60808 Nov 24 00:20:40.771450 sshd-session[5423]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:40.775568 systemd-logind[1680]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:20:40.777544 systemd[1]: sshd@15-10.200.4.12:22-10.200.16.10:60808.service: Deactivated successfully. Nov 24 00:20:40.781277 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:20:40.785912 systemd-logind[1680]: Removed session 18. Nov 24 00:20:40.881402 systemd[1]: Started sshd@16-10.200.4.12:22-10.200.16.10:36482.service - OpenSSH per-connection server daemon (10.200.16.10:36482). 
Nov 24 00:20:40.944872 containerd[1704]: time="2025-11-24T00:20:40.944304469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:40.950031 containerd[1704]: time="2025-11-24T00:20:40.949863520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:20:40.950290 containerd[1704]: time="2025-11-24T00:20:40.949993403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:20:40.950597 kubelet[3165]: E1124 00:20:40.950382 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:20:40.950597 kubelet[3165]: E1124 00:20:40.950433 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:20:40.950942 containerd[1704]: time="2025-11-24T00:20:40.950843580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:20:40.951348 kubelet[3165]: E1124 00:20:40.951307 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d88c99f6b-q6djh_calico-apiserver(d05efb04-97c4-4681-b343-8c87d932c961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:40.952864 kubelet[3165]: E1124 00:20:40.952820 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:20:41.221592 containerd[1704]: time="2025-11-24T00:20:41.221349303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:20:41.227119 containerd[1704]: time="2025-11-24T00:20:41.226992864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:20:41.227119 containerd[1704]: time="2025-11-24T00:20:41.227088131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:20:41.227368 kubelet[3165]: E1124 00:20:41.227332 3165 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:20:41.227416 kubelet[3165]: E1124 00:20:41.227392 3165 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:20:41.227557 kubelet[3165]: E1124 
00:20:41.227517 3165 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z6pwc_calico-system(377ffa75-e56f-4a86-9355-a323312d6a89): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:20:41.229118 kubelet[3165]: E1124 00:20:41.229069 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:20:41.488071 sshd[5436]: Accepted publickey for core from 10.200.16.10 port 36482 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:41.488876 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:41.492700 systemd-logind[1680]: New session 19 
of user core. Nov 24 00:20:41.499318 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:20:42.412909 kubelet[3165]: E1124 00:20:42.412859 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:20:42.415545 sshd[5439]: Connection closed by 10.200.16.10 port 36482 Nov 24 00:20:42.417994 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:42.421454 systemd-logind[1680]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:20:42.423460 systemd[1]: sshd@16-10.200.4.12:22-10.200.16.10:36482.service: Deactivated successfully. Nov 24 00:20:42.426925 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:20:42.431775 systemd-logind[1680]: Removed session 19. Nov 24 00:20:42.530405 systemd[1]: Started sshd@17-10.200.4.12:22-10.200.16.10:36490.service - OpenSSH per-connection server daemon (10.200.16.10:36490). Nov 24 00:20:43.137085 sshd[5456]: Accepted publickey for core from 10.200.16.10 port 36490 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:43.138220 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:43.142858 systemd-logind[1680]: New session 20 of user core. Nov 24 00:20:43.145369 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 00:20:43.760224 sshd[5459]: Connection closed by 10.200.16.10 port 36490 Nov 24 00:20:43.763350 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:43.766868 systemd-logind[1680]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:20:43.767686 systemd[1]: sshd@17-10.200.4.12:22-10.200.16.10:36490.service: Deactivated successfully. Nov 24 00:20:43.770764 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:20:43.775022 systemd-logind[1680]: Removed session 20. Nov 24 00:20:43.868390 systemd[1]: Started sshd@18-10.200.4.12:22-10.200.16.10:36498.service - OpenSSH per-connection server daemon (10.200.16.10:36498). Nov 24 00:20:44.473973 sshd[5469]: Accepted publickey for core from 10.200.16.10 port 36498 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:44.475226 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:44.479597 systemd-logind[1680]: New session 21 of user core. Nov 24 00:20:44.485307 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 00:20:44.976319 sshd[5472]: Connection closed by 10.200.16.10 port 36498 Nov 24 00:20:44.976910 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:44.983173 systemd[1]: sshd@18-10.200.4.12:22-10.200.16.10:36498.service: Deactivated successfully. Nov 24 00:20:44.986860 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:20:44.988711 systemd-logind[1680]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:20:44.991050 systemd-logind[1680]: Removed session 21. 
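Editor's note: the containerd entries above show ghcr.io returning 404 Not Found while resolving ghcr.io/flatcar/calico/apiserver:v3.30.4 (and, further on, the other Calico images at the same tag), which kubelet then surfaces as ErrImagePull. The Python sketch below reproduces that resolution step from outside the node; it assumes ghcr.io serves the standard OCI distribution endpoints (an anonymous pull token at /token and a /v2/<name>/manifests/<tag> manifest endpoint), and the repository and tag are copied from the failing reference in the log.

# Sketch: check whether a tag resolves on ghcr.io, mirroring the 404 that
# containerd logs above. Assumes the standard token + manifest endpoints of
# the OCI distribution API; adjust if the registry behaves differently.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/apiserver"   # taken from the failing image reference
TAG = "v3.30.4"                     # tag that containerd could not resolve

def ghcr_tag_exists(repo: str, tag: str) -> bool:
    # Anonymous pull token for a public repository (assumed endpoint).
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # Manifest lookup; a 404 here matches the log's "not found".
    manifest_url = f"https://ghcr.io/v2/{repo}/manifests/{tag}"
    req = urllib.request.Request(manifest_url, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.oci.image.index.v1+json, "
                             "application/vnd.docker.distribution.manifest.list.v2+json")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} exists:", ghcr_tag_exists(REPO, TAG))

A False result corresponds to the registry-side "not found" that containerd reports; a True result would point the investigation back at the node's registry or mirror configuration instead.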
Nov 24 00:20:49.412925 kubelet[3165]: E1124 00:20:49.412824 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:20:50.087519 systemd[1]: Started sshd@19-10.200.4.12:22-10.200.16.10:50744.service - OpenSSH per-connection server daemon (10.200.16.10:50744). Nov 24 00:20:50.415184 kubelet[3165]: E1124 00:20:50.415070 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:20:50.416813 kubelet[3165]: E1124 00:20:50.415764 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:20:50.702550 sshd[5484]: Accepted publickey for core from 10.200.16.10 port 50744 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:50.704780 sshd-session[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:50.712380 systemd-logind[1680]: New session 22 of user core. Nov 24 00:20:50.714361 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:20:51.180454 sshd[5487]: Connection closed by 10.200.16.10 port 50744 Nov 24 00:20:51.182055 sshd-session[5484]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:51.184709 systemd[1]: sshd@19-10.200.4.12:22-10.200.16.10:50744.service: Deactivated successfully. Nov 24 00:20:51.186915 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:20:51.188548 systemd-logind[1680]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:20:51.190005 systemd-logind[1680]: Removed session 22. 
Nov 24 00:20:53.414158 kubelet[3165]: E1124 00:20:53.414108 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:20:53.414602 kubelet[3165]: E1124 00:20:53.414331 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:20:56.288044 systemd[1]: Started sshd@20-10.200.4.12:22-10.200.16.10:50748.service - OpenSSH per-connection server daemon (10.200.16.10:50748). Nov 24 00:20:56.412489 kubelet[3165]: E1124 00:20:56.412411 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:20:56.897826 sshd[5501]: Accepted publickey for core from 10.200.16.10 port 50748 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:20:56.899639 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:20:56.908056 systemd-logind[1680]: New session 23 of user core. Nov 24 00:20:56.914546 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 00:20:57.379453 sshd[5504]: Connection closed by 10.200.16.10 port 50748 Nov 24 00:20:57.380973 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Nov 24 00:20:57.384538 systemd-logind[1680]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:20:57.384835 systemd[1]: sshd@20-10.200.4.12:22-10.200.16.10:50748.service: Deactivated successfully. Nov 24 00:20:57.386628 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:20:57.388132 systemd-logind[1680]: Removed session 23. 
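Editor's note: once the initial pull fails, the entries switch from ErrImagePull to ImagePullBackOff ("Back-off pulling image ..."), which is why the same error reappears at growing intervals for each pod. The sketch below only models that retry cadence; the 10 s initial delay, doubling factor, and 5-minute cap are the commonly cited kubelet defaults and are assumptions here, not values read from this node's configuration.

# Sketch of the exponential back-off behind the repeated
# "Back-off pulling image ..." entries above. Defaults are assumed.
def backoff_schedule(initial: float = 10.0, factor: float = 2.0,
                     cap: float = 300.0, failures: int = 8):
    """Yield the wait before each retry after consecutive pull failures."""
    delay = initial
    for _ in range(failures):
        yield min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    for attempt, wait in enumerate(backoff_schedule(), start=1):
        print(f"retry {attempt}: wait {wait:.0f}s")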
Nov 24 00:21:00.412830 kubelet[3165]: E1124 00:21:00.412637 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:21:01.412527 kubelet[3165]: E1124 00:21:01.412461 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:21:02.486407 systemd[1]: Started sshd@21-10.200.4.12:22-10.200.16.10:49118.service - OpenSSH per-connection server daemon (10.200.16.10:49118). Nov 24 00:21:03.084437 sshd[5539]: Accepted publickey for core from 10.200.16.10 port 49118 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:03.085962 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:03.090845 systemd-logind[1680]: New session 24 of user core. Nov 24 00:21:03.097463 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 24 00:21:03.598377 sshd[5542]: Connection closed by 10.200.16.10 port 49118 Nov 24 00:21:03.599406 sshd-session[5539]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:03.605366 systemd[1]: sshd@21-10.200.4.12:22-10.200.16.10:49118.service: Deactivated successfully. Nov 24 00:21:03.605554 systemd-logind[1680]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:21:03.609356 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:21:03.612316 systemd-logind[1680]: Removed session 24. 
Nov 24 00:21:04.412623 kubelet[3165]: E1124 00:21:04.412485 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:21:04.415847 kubelet[3165]: E1124 00:21:04.415695 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:21:05.411700 kubelet[3165]: E1124 00:21:05.411657 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:21:08.725905 systemd[1]: Started sshd@22-10.200.4.12:22-10.200.16.10:49122.service - OpenSSH per-connection server daemon (10.200.16.10:49122). Nov 24 00:21:09.322699 sshd[5555]: Accepted publickey for core from 10.200.16.10 port 49122 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:09.324456 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:09.330117 systemd-logind[1680]: New session 25 of user core. Nov 24 00:21:09.337341 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 24 00:21:09.841347 sshd[5558]: Connection closed by 10.200.16.10 port 49122 Nov 24 00:21:09.842085 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:09.848641 systemd[1]: sshd@22-10.200.4.12:22-10.200.16.10:49122.service: Deactivated successfully. Nov 24 00:21:09.848823 systemd-logind[1680]: Session 25 logged out. Waiting for processes to exit. Nov 24 00:21:09.854255 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 00:21:09.858115 systemd-logind[1680]: Removed session 25. 
Nov 24 00:21:11.412430 kubelet[3165]: E1124 00:21:11.412386 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:21:14.950633 systemd[1]: Started sshd@23-10.200.4.12:22-10.200.16.10:39834.service - OpenSSH per-connection server daemon (10.200.16.10:39834). Nov 24 00:21:15.412029 kubelet[3165]: E1124 00:21:15.411949 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:21:15.413112 kubelet[3165]: E1124 00:21:15.413079 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:21:15.551504 sshd[5573]: Accepted publickey for core from 10.200.16.10 port 39834 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:15.553603 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:15.561793 systemd-logind[1680]: New session 26 of user core. Nov 24 00:21:15.566477 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 24 00:21:16.057188 sshd[5576]: Connection closed by 10.200.16.10 port 39834 Nov 24 00:21:16.058180 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:16.061576 systemd-logind[1680]: Session 26 logged out. Waiting for processes to exit. Nov 24 00:21:16.063692 systemd[1]: sshd@23-10.200.4.12:22-10.200.16.10:39834.service: Deactivated successfully. Nov 24 00:21:16.067821 systemd[1]: session-26.scope: Deactivated successfully. Nov 24 00:21:16.070719 systemd-logind[1680]: Removed session 26. 
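Editor's note: because the same back-off messages repeat for several pods, it helps to reduce the journal to a table of pod → missing images. The sketch below does that with regular expressions written against the exact fields visible in these entries (pod="namespace/name" and the ghcr.io/flatcar/calico/... references). It expects one journal entry per line, as journalctl emits it; the capture here wraps several entries per physical line. The script name in the usage example is illustrative.

# Sketch: summarise which pods are stuck on which images by scanning
# journal output like the kubelet entries above (one entry per line).
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[\w.-]+:v[\w.-]+')

def summarise(lines):
    failures = defaultdict(set)
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in IMAGE_RE.findall(line):
            failures[pod.group("pod")].add(image)
    return failures

if __name__ == "__main__":
    for pod, images in sorted(summarise(sys.stdin).items()):
        print(pod)
        for image in sorted(images):
            print("   ", image)

Typical usage would be: journalctl -u kubelet | python3 summarise_pulls.py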
Nov 24 00:21:16.413057 kubelet[3165]: E1124 00:21:16.412720 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:21:17.413653 kubelet[3165]: E1124 00:21:17.413593 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:21:20.413683 kubelet[3165]: E1124 00:21:20.413642 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:21:21.164294 systemd[1]: Started sshd@24-10.200.4.12:22-10.200.16.10:41578.service - OpenSSH per-connection server daemon (10.200.16.10:41578). Nov 24 00:21:21.763208 sshd[5590]: Accepted publickey for core from 10.200.16.10 port 41578 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:21.764416 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:21.771747 systemd-logind[1680]: New session 27 of user core. Nov 24 00:21:21.776201 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 24 00:21:22.276074 sshd[5595]: Connection closed by 10.200.16.10 port 41578 Nov 24 00:21:22.278195 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:22.281538 systemd-logind[1680]: Session 27 logged out. Waiting for processes to exit. Nov 24 00:21:22.283561 systemd[1]: sshd@24-10.200.4.12:22-10.200.16.10:41578.service: Deactivated successfully. Nov 24 00:21:22.287130 systemd[1]: session-27.scope: Deactivated successfully. Nov 24 00:21:22.289756 systemd-logind[1680]: Removed session 27. 
Nov 24 00:21:25.412709 kubelet[3165]: E1124 00:21:25.412662 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:21:27.387407 systemd[1]: Started sshd@25-10.200.4.12:22-10.200.16.10:41586.service - OpenSSH per-connection server daemon (10.200.16.10:41586). Nov 24 00:21:27.415313 kubelet[3165]: E1124 00:21:27.413894 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:21:27.998859 sshd[5611]: Accepted publickey for core from 10.200.16.10 port 41586 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:27.999973 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:28.004296 systemd-logind[1680]: New session 28 of user core. Nov 24 00:21:28.007324 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 24 00:21:28.412408 kubelet[3165]: E1124 00:21:28.411890 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b58cdd7d9-2thpc" podUID="8d3a110a-eb9c-4905-82ad-09bfe36d2064" Nov 24 00:21:28.412977 kubelet[3165]: E1124 00:21:28.412849 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:21:28.474135 sshd[5636]: Connection closed by 10.200.16.10 port 41586 Nov 24 00:21:28.476342 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:28.480841 systemd-logind[1680]: Session 28 logged out. Waiting for processes to exit. Nov 24 00:21:28.481684 systemd[1]: sshd@25-10.200.4.12:22-10.200.16.10:41586.service: Deactivated successfully. Nov 24 00:21:28.485021 systemd[1]: session-28.scope: Deactivated successfully. Nov 24 00:21:28.487846 systemd-logind[1680]: Removed session 28. Nov 24 00:21:29.411665 kubelet[3165]: E1124 00:21:29.411611 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db" Nov 24 00:21:33.582410 systemd[1]: Started sshd@26-10.200.4.12:22-10.200.16.10:56822.service - OpenSSH per-connection server daemon (10.200.16.10:56822). Nov 24 00:21:34.186529 sshd[5648]: Accepted publickey for core from 10.200.16.10 port 56822 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:34.187650 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:34.192193 systemd-logind[1680]: New session 29 of user core. Nov 24 00:21:34.197315 systemd[1]: Started session-29.scope - Session 29 of User core. 
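Editor's note: interleaved with the pull failures, sshd and systemd-logind record a series of short sessions for the core user (sessions 19 through 30 in this capture). If the journal is exported one entry per line, pairing "Accepted publickey ... port N" with "Connection closed ... port N" gives rough session durations, as the sketch below shows; the timestamp format is taken directly from these entries (no year, so only intra-day durations are meaningful).

# Sketch: estimate SSH session lengths from journal lines like those above.
# Assumes one journal entry per line (journalctl output).
import re
import sys
from datetime import datetime

TS_RE = re.compile(r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) ")
OPEN_RE = re.compile(r"Accepted publickey for \S+ from \S+ port (?P<port>\d+)")
CLOSE_RE = re.compile(r"Connection closed by \S+ port (?P<port>\d+)")

def parse_ts(line):
    m = TS_RE.match(line)
    return datetime.strptime(m.group("ts"), "%b %d %H:%M:%S.%f") if m else None

def session_durations(lines):
    opened = {}
    for line in lines:
        ts = parse_ts(line)
        if ts is None:
            continue
        if (m := OPEN_RE.search(line)):
            opened[m.group("port")] = ts
        elif (m := CLOSE_RE.search(line)) and m.group("port") in opened:
            yield m.group("port"), (ts - opened.pop(m.group("port"))).total_seconds()

if __name__ == "__main__":
    for port, seconds in session_durations(sys.stdin):
        print(f"port {port}: {seconds:.1f}s")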
Nov 24 00:21:34.412114 kubelet[3165]: E1124 00:21:34.411712 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-g5jrw" podUID="59c9a609-1992-4463-b755-389571dcaa93" Nov 24 00:21:34.661348 sshd[5651]: Connection closed by 10.200.16.10 port 56822 Nov 24 00:21:34.661891 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:34.665306 systemd[1]: sshd@26-10.200.4.12:22-10.200.16.10:56822.service: Deactivated successfully. Nov 24 00:21:34.667277 systemd[1]: session-29.scope: Deactivated successfully. Nov 24 00:21:34.668000 systemd-logind[1680]: Session 29 logged out. Waiting for processes to exit. Nov 24 00:21:34.669154 systemd-logind[1680]: Removed session 29. Nov 24 00:21:37.411939 kubelet[3165]: E1124 00:21:37.411889 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d88c99f6b-q6djh" podUID="d05efb04-97c4-4681-b343-8c87d932c961" Nov 24 00:21:39.412149 kubelet[3165]: E1124 00:21:39.412108 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c79b749df-5s9h6" podUID="55ebf745-9192-40af-99eb-e78240db2491" Nov 24 00:21:39.769388 systemd[1]: Started sshd@27-10.200.4.12:22-10.200.16.10:56838.service - OpenSSH per-connection server daemon (10.200.16.10:56838). Nov 24 00:21:40.382273 sshd[5669]: Accepted publickey for core from 10.200.16.10 port 56838 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:21:40.383435 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:21:40.387641 systemd-logind[1680]: New session 30 of user core. Nov 24 00:21:40.393331 systemd[1]: Started session-30.scope - Session 30 of User core. 
Nov 24 00:21:40.414825 kubelet[3165]: E1124 00:21:40.414562 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z6pwc" podUID="377ffa75-e56f-4a86-9355-a323312d6a89" Nov 24 00:21:40.859595 sshd[5672]: Connection closed by 10.200.16.10 port 56838 Nov 24 00:21:40.860353 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Nov 24 00:21:40.864203 systemd[1]: sshd@27-10.200.4.12:22-10.200.16.10:56838.service: Deactivated successfully. Nov 24 00:21:40.866087 systemd[1]: session-30.scope: Deactivated successfully. Nov 24 00:21:40.866821 systemd-logind[1680]: Session 30 logged out. Waiting for processes to exit. Nov 24 00:21:40.867931 systemd-logind[1680]: Removed session 30. Nov 24 00:21:42.414876 kubelet[3165]: E1124 00:21:42.414675 3165 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wzcms" podUID="45d4961d-4fb6-4f95-8d11-3d57944631db"
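Editor's note: an equivalent cluster-side view of the failures logged above can be taken from the Kubernetes API rather than the node journal. The sketch below shells out to kubectl get pods -A -o json and filters on containerStatuses[].state.waiting.reason, which is where the ErrImagePull / ImagePullBackOff reasons seen in this log are reported; it assumes kubectl is on PATH with a kubeconfig for this cluster.

# Sketch: list pods whose containers are waiting on image pulls, using
# "kubectl get pods -A -o json" and the core/v1 PodStatus fields.
import json
import subprocess

WAIT_REASONS = {"ErrImagePull", "ImagePullBackOff"}

def pull_failures():
    out = subprocess.run(
        ["kubectl", "get", "pods", "-A", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for pod in json.loads(out)["items"]:
        for status in pod.get("status", {}).get("containerStatuses", []):
            waiting = (status.get("state") or {}).get("waiting") or {}
            if waiting.get("reason") in WAIT_REASONS:
                yield (pod["metadata"]["namespace"], pod["metadata"]["name"],
                       status["name"], waiting.get("message", ""))

if __name__ == "__main__":
    for ns, pod, container, message in pull_failures():
        print(f"{ns}/{pod} [{container}]: {message[:80]}")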