Nov 24 00:15:20.014611 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 00:15:20.014640 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:15:20.014654 kernel: BIOS-provided physical RAM map:
Nov 24 00:15:20.014661 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 24 00:15:20.014667 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 24 00:15:20.014673 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Nov 24 00:15:20.014681 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Nov 24 00:15:20.014688 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Nov 24 00:15:20.014695 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Nov 24 00:15:20.014703 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 24 00:15:20.014709 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 24 00:15:20.014719 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 24 00:15:20.014729 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 24 00:15:20.014735 kernel: printk: legacy bootconsole [earlyser0] enabled
Nov 24 00:15:20.014743 kernel: NX (Execute Disable) protection: active
Nov 24 00:15:20.014752 kernel: APIC: Static calls initialized
Nov 24 00:15:20.014759 kernel: efi: EFI v2.7 by Microsoft
Nov 24 00:15:20.014767 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018
Nov 24 00:15:20.014774 kernel: random: crng init done
Nov 24 00:15:20.014782 kernel: secureboot: Secure boot disabled
Nov 24 00:15:20.014788 kernel: SMBIOS 3.1.0 present.
Nov 24 00:15:20.014795 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Nov 24 00:15:20.014802 kernel: DMI: Memory slots populated: 2/2
Nov 24 00:15:20.014808 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 24 00:15:20.014816 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Nov 24 00:15:20.014823 kernel: Hyper-V: Nested features: 0x3e0101
Nov 24 00:15:20.014832 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 24 00:15:20.014839 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 24 00:15:20.014846 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 24 00:15:20.014854 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 24 00:15:20.014862 kernel: tsc: Detected 2300.000 MHz processor
Nov 24 00:15:20.014869 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 00:15:20.014879 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 00:15:20.014906 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Nov 24 00:15:20.014915 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 24 00:15:20.014923 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 00:15:20.014934 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Nov 24 00:15:20.014942 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Nov 24 00:15:20.014950 kernel: Using GB pages for direct mapping
Nov 24 00:15:20.014959 kernel: ACPI: Early table checksum verification disabled
Nov 24 00:15:20.014970 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 24 00:15:20.014979 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.014990 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.014998 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 24 00:15:20.015006 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 24 00:15:20.015015 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.015023 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.015032 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.015040 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Nov 24 00:15:20.015051 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Nov 24 00:15:20.015059 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 24 00:15:20.015068 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 24 00:15:20.015076 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Nov 24 00:15:20.015085 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 24 00:15:20.015093 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 24 00:15:20.015102 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 24 00:15:20.015110 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 24 00:15:20.015119 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 24 00:15:20.015129 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Nov 24 00:15:20.015137 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 24 00:15:20.015145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Nov 24 00:15:20.015154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Nov 24 00:15:20.015162 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Nov 24 00:15:20.015171 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Nov 24 00:15:20.015180 kernel: Zone ranges:
Nov 24 00:15:20.015188 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 00:15:20.015196 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 00:15:20.015206 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 24 00:15:20.015214 kernel: Device empty
Nov 24 00:15:20.015222 kernel: Movable zone start for each node
Nov 24 00:15:20.015230 kernel: Early memory node ranges
Nov 24 00:15:20.015238 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 24 00:15:20.015246 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Nov 24 00:15:20.015254 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Nov 24 00:15:20.015262 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 24 00:15:20.015270 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 24 00:15:20.015280 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 24 00:15:20.015288 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 00:15:20.015296 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 24 00:15:20.015304 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 24 00:15:20.015312 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Nov 24 00:15:20.015321 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 24 00:15:20.015329 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 24 00:15:20.015337 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 00:15:20.015345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 00:15:20.015355 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 00:15:20.015363 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 24 00:15:20.015371 kernel: TSC deadline timer available
Nov 24 00:15:20.015379 kernel: CPU topo: Max. logical packages: 1
Nov 24 00:15:20.015387 kernel: CPU topo: Max. logical dies: 1
Nov 24 00:15:20.015395 kernel: CPU topo: Max. dies per package: 1
Nov 24 00:15:20.015403 kernel: CPU topo: Max. threads per core: 2
Nov 24 00:15:20.015412 kernel: CPU topo: Num. cores per package: 1
Nov 24 00:15:20.015420 kernel: CPU topo: Num. threads per package: 2
Nov 24 00:15:20.015428 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 24 00:15:20.015438 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 24 00:15:20.015447 kernel: Booting paravirtualized kernel on Hyper-V
Nov 24 00:15:20.015455 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 00:15:20.015464 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 24 00:15:20.015472 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 24 00:15:20.015480 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 24 00:15:20.015488 kernel: pcpu-alloc: [0] 0 1
Nov 24 00:15:20.015496 kernel: Hyper-V: PV spinlocks enabled
Nov 24 00:15:20.015507 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 00:15:20.015517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:15:20.015525 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 00:15:20.015534 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 00:15:20.015542 kernel: Fallback order for Node 0: 0
Nov 24 00:15:20.015551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Nov 24 00:15:20.015559 kernel: Policy zone: Normal
Nov 24 00:15:20.015567 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 00:15:20.015575 kernel: software IO TLB: area num 2.
Nov 24 00:15:20.015585 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 24 00:15:20.015593 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 00:15:20.015602 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 00:15:20.015610 kernel: Dynamic Preempt: voluntary
Nov 24 00:15:20.015619 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 00:15:20.015628 kernel: rcu: RCU event tracing is enabled.
Nov 24 00:15:20.015643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 24 00:15:20.015654 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 00:15:20.015663 kernel: Rude variant of Tasks RCU enabled.
Nov 24 00:15:20.015672 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 00:15:20.015681 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 00:15:20.015692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 24 00:15:20.015701 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:15:20.015710 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:15:20.015720 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:15:20.015729 kernel: Using NULL legacy PIC
Nov 24 00:15:20.015739 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 24 00:15:20.015748 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 00:15:20.015757 kernel: Console: colour dummy device 80x25
Nov 24 00:15:20.015766 kernel: printk: legacy console [tty1] enabled
Nov 24 00:15:20.015775 kernel: printk: legacy console [ttyS0] enabled
Nov 24 00:15:20.015784 kernel: printk: legacy bootconsole [earlyser0] disabled
Nov 24 00:15:20.015793 kernel: ACPI: Core revision 20240827
Nov 24 00:15:20.015804 kernel: Failed to register legacy timer interrupt
Nov 24 00:15:20.015813 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 00:15:20.015824 kernel: x2apic enabled
Nov 24 00:15:20.015833 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 00:15:20.015842 kernel: Hyper-V: Host Build 10.0.26100.1421-1-0
Nov 24 00:15:20.015851 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 24 00:15:20.015860 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Nov 24 00:15:20.015869 kernel: Hyper-V: Using IPI hypercalls
Nov 24 00:15:20.015878 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 24 00:15:20.021941 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 24 00:15:20.021964 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 24 00:15:20.021979 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 24 00:15:20.021990 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 24 00:15:20.022000 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 24 00:15:20.022009 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Nov 24 00:15:20.022018 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Nov 24 00:15:20.022028 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 00:15:20.022038 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 24 00:15:20.022047 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 24 00:15:20.022057 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 00:15:20.022068 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 00:15:20.022077 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 00:15:20.022087 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 24 00:15:20.022096 kernel: RETBleed: Vulnerable
Nov 24 00:15:20.022105 kernel: Speculative Store Bypass: Vulnerable
Nov 24 00:15:20.022114 kernel: active return thunk: its_return_thunk
Nov 24 00:15:20.022123 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 24 00:15:20.022132 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 00:15:20.022141 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 00:15:20.022150 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 00:15:20.022160 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 24 00:15:20.022171 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 24 00:15:20.022180 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 24 00:15:20.022189 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Nov 24 00:15:20.022198 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Nov 24 00:15:20.022207 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Nov 24 00:15:20.022216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 00:15:20.022226 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 24 00:15:20.022235 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 24 00:15:20.022244 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 24 00:15:20.022253 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Nov 24 00:15:20.022262 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Nov 24 00:15:20.022273 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Nov 24 00:15:20.022282 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Nov 24 00:15:20.022291 kernel: Freeing SMP alternatives memory: 32K
Nov 24 00:15:20.022300 kernel: pid_max: default: 32768 minimum: 301
Nov 24 00:15:20.022310 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 00:15:20.022319 kernel: landlock: Up and running.
Nov 24 00:15:20.022328 kernel: SELinux: Initializing.
Nov 24 00:15:20.022337 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 00:15:20.022346 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 00:15:20.022355 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Nov 24 00:15:20.022364 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Nov 24 00:15:20.022374 kernel: signal: max sigframe size: 11952
Nov 24 00:15:20.022385 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 00:15:20.022395 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 00:15:20.022404 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 00:15:20.022414 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 24 00:15:20.022423 kernel: smp: Bringing up secondary CPUs ...
Nov 24 00:15:20.022432 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 00:15:20.022441 kernel: .... node #0, CPUs: #1
Nov 24 00:15:20.022451 kernel: smp: Brought up 1 node, 2 CPUs
Nov 24 00:15:20.022460 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Nov 24 00:15:20.022472 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved)
Nov 24 00:15:20.022482 kernel: devtmpfs: initialized
Nov 24 00:15:20.022491 kernel: x86/mm: Memory block size: 128MB
Nov 24 00:15:20.022501 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 24 00:15:20.022511 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 00:15:20.022521 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 24 00:15:20.022530 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 00:15:20.022539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 00:15:20.022548 kernel: audit: initializing netlink subsys (disabled)
Nov 24 00:15:20.022560 kernel: audit: type=2000 audit(1763943316.082:1): state=initialized audit_enabled=0 res=1
Nov 24 00:15:20.022569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 00:15:20.022578 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 00:15:20.022587 kernel: cpuidle: using governor menu
Nov 24 00:15:20.022597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 00:15:20.022606 kernel: dca service started, version 1.12.1
Nov 24 00:15:20.022615 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Nov 24 00:15:20.022625 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Nov 24 00:15:20.022636 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 00:15:20.022645 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 00:15:20.022654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 00:15:20.022663 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 00:15:20.022673 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 00:15:20.022682 kernel: ACPI: Added _OSI(Module Device)
Nov 24 00:15:20.022691 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 00:15:20.022700 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 00:15:20.022710 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 00:15:20.022721 kernel: ACPI: Interpreter enabled
Nov 24 00:15:20.022730 kernel: ACPI: PM: (supports S0 S5)
Nov 24 00:15:20.022739 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 00:15:20.022749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 00:15:20.022758 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 24 00:15:20.022768 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 24 00:15:20.022777 kernel: iommu: Default domain type: Translated
Nov 24 00:15:20.022786 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 00:15:20.022795 kernel: efivars: Registered efivars operations
Nov 24 00:15:20.022806 kernel: PCI: Using ACPI for IRQ routing
Nov 24 00:15:20.022816 kernel: PCI: System does not support PCI
Nov 24 00:15:20.022825 kernel: vgaarb: loaded
Nov 24 00:15:20.022834 kernel: clocksource: Switched to clocksource tsc-early
Nov 24 00:15:20.022843 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 00:15:20.022853 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 00:15:20.022862 kernel: pnp: PnP ACPI init
Nov 24 00:15:20.022871 kernel: pnp: PnP ACPI: found 3 devices
Nov 24 00:15:20.022880 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 00:15:20.022902 kernel: NET: Registered PF_INET protocol family
Nov 24 00:15:20.022912 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 00:15:20.022924 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 00:15:20.022936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 00:15:20.022945 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 00:15:20.022954 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 24 00:15:20.022964 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 00:15:20.022972 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 00:15:20.022980 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 00:15:20.022991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 00:15:20.023000 kernel: NET: Registered PF_XDP protocol family
Nov 24 00:15:20.023008 kernel: PCI: CLS 0 bytes, default 64
Nov 24 00:15:20.023017 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 00:15:20.023029 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Nov 24 00:15:20.023038 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Nov 24 00:15:20.023046 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Nov 24 00:15:20.023054 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Nov 24 00:15:20.023061 kernel: clocksource: Switched to clocksource tsc
Nov 24 00:15:20.023070 kernel: Initialise system trusted keyrings
Nov 24 00:15:20.023078 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 24 00:15:20.023087 kernel: Key type asymmetric registered
Nov 24 00:15:20.023096 kernel: Asymmetric key parser 'x509' registered
Nov 24 00:15:20.023105 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 00:15:20.023115 kernel: io scheduler mq-deadline registered
Nov 24 00:15:20.023124 kernel: io scheduler kyber registered
Nov 24 00:15:20.023133 kernel: io scheduler bfq registered
Nov 24 00:15:20.023143 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 00:15:20.023154 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 00:15:20.023164 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:15:20.023173 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 24 00:15:20.023182 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:15:20.023192 kernel: i8042: PNP: No PS/2 controller found.
Nov 24 00:15:20.023336 kernel: rtc_cmos 00:02: registered as rtc0
Nov 24 00:15:20.023419 kernel: rtc_cmos 00:02: setting system clock to 2025-11-24T00:15:19 UTC (1763943319)
Nov 24 00:15:20.023494 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 24 00:15:20.023508 kernel: intel_pstate: Intel P-state driver initializing
Nov 24 00:15:20.023519 kernel: efifb: probing for efifb
Nov 24 00:15:20.023529 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 24 00:15:20.023539 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 24 00:15:20.023548 kernel: efifb: scrolling: redraw
Nov 24 00:15:20.023558 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 24 00:15:20.023568 kernel: Console: switching to colour frame buffer device 128x48
Nov 24 00:15:20.023577 kernel: fb0: EFI VGA frame buffer device
Nov 24 00:15:20.023587 kernel: pstore: Using crash dump compression: deflate
Nov 24 00:15:20.023599 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 24 00:15:20.023608 kernel: NET: Registered PF_INET6 protocol family
Nov 24 00:15:20.023618 kernel: Segment Routing with IPv6
Nov 24 00:15:20.023628 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 00:15:20.023638 kernel: NET: Registered PF_PACKET protocol family
Nov 24 00:15:20.023647 kernel: Key type dns_resolver registered
Nov 24 00:15:20.023657 kernel: IPI shorthand broadcast: enabled
Nov 24 00:15:20.023666 kernel: sched_clock: Marking stable (3067004508, 95652491)->(3548487940, -385830941)
Nov 24 00:15:20.023675 kernel: registered taskstats version 1
Nov 24 00:15:20.023687 kernel: Loading compiled-in X.509 certificates
Nov 24 00:15:20.023697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 00:15:20.023706 kernel: Demotion targets for Node 0: null
Nov 24 00:15:20.023716 kernel: Key type .fscrypt registered
Nov 24 00:15:20.023725 kernel: Key type fscrypt-provisioning registered
Nov 24 00:15:20.023734 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 00:15:20.023744 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:15:20.023753 kernel: ima: No architecture policies found Nov 24 00:15:20.023763 kernel: clk: Disabling unused clocks Nov 24 00:15:20.023774 kernel: Warning: unable to open an initial console. Nov 24 00:15:20.023824 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 00:15:20.023835 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:15:20.023844 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:15:20.023853 kernel: Run /init as init process Nov 24 00:15:20.023862 kernel: with arguments: Nov 24 00:15:20.023871 kernel: /init Nov 24 00:15:20.023880 kernel: with environment: Nov 24 00:15:20.028766 kernel: HOME=/ Nov 24 00:15:20.028782 kernel: TERM=linux Nov 24 00:15:20.028794 systemd[1]: Successfully made /usr/ read-only. Nov 24 00:15:20.028811 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:15:20.028821 systemd[1]: Detected virtualization microsoft. Nov 24 00:15:20.028831 systemd[1]: Detected architecture x86-64. Nov 24 00:15:20.028839 systemd[1]: Running in initrd. Nov 24 00:15:20.028848 systemd[1]: No hostname configured, using default hostname. Nov 24 00:15:20.028861 systemd[1]: Hostname set to . Nov 24 00:15:20.028871 systemd[1]: Initializing machine ID from random generator. Nov 24 00:15:20.028881 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:15:20.028911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:15:20.028922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:15:20.028933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:15:20.028943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:15:20.028953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:15:20.028966 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:15:20.028978 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:15:20.028988 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:15:20.028998 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:15:20.029008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:15:20.029019 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:15:20.029029 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:15:20.029041 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:15:20.029051 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:15:20.029061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:15:20.029070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 24 00:15:20.029080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:15:20.029091 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:15:20.029100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:15:20.029110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:15:20.029119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:15:20.029131 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:15:20.029140 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:15:20.029149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:15:20.029159 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 00:15:20.029169 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:15:20.029178 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:15:20.029187 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:15:20.029198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:15:20.029219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:15:20.029231 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:15:20.029263 systemd-journald[185]: Collecting audit messages is disabled. Nov 24 00:15:20.029290 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:15:20.029300 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:15:20.029310 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:15:20.029322 systemd-journald[185]: Journal started Nov 24 00:15:20.029347 systemd-journald[185]: Runtime Journal (/run/log/journal/27c8e882802a4bc1aac5ea70b711226a) is 8M, max 158.6M, 150.6M free. Nov 24 00:15:20.014751 systemd-modules-load[187]: Inserted module 'overlay' Nov 24 00:15:20.033408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:15:20.038125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:15:20.043499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:20.053249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:15:20.060022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:15:20.063582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:15:20.075967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 00:15:20.076472 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:15:20.082008 kernel: Bridge firewalling registered Nov 24 00:15:20.083984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:15:20.086555 systemd-modules-load[187]: Inserted module 'br_netfilter' Nov 24 00:15:20.091759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 24 00:15:20.092487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:15:20.096006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:15:20.107068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:15:20.111255 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:15:20.119965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:15:20.126007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:15:20.135128 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:15:20.174965 systemd-resolved[230]: Positive Trust Anchors: Nov 24 00:15:20.174983 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:15:20.175019 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:15:20.195361 systemd-resolved[230]: Defaulting to hostname 'linux'. Nov 24 00:15:20.198300 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:15:20.200923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:15:20.220906 kernel: SCSI subsystem initialized Nov 24 00:15:20.228901 kernel: Loading iSCSI transport class v2.0-870. Nov 24 00:15:20.238909 kernel: iscsi: registered transport (tcp) Nov 24 00:15:20.256996 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:15:20.257042 kernel: QLogic iSCSI HBA Driver Nov 24 00:15:20.271409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:15:20.289660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:15:20.291304 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:15:20.328416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:15:20.332550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 24 00:15:20.382912 kernel: raid6: avx512x4 gen() 40803 MB/s Nov 24 00:15:20.400899 kernel: raid6: avx512x2 gen() 41358 MB/s Nov 24 00:15:20.418901 kernel: raid6: avx512x1 gen() 24922 MB/s Nov 24 00:15:20.436900 kernel: raid6: avx2x4 gen() 35345 MB/s Nov 24 00:15:20.453897 kernel: raid6: avx2x2 gen() 37062 MB/s Nov 24 00:15:20.472063 kernel: raid6: avx2x1 gen() 30401 MB/s Nov 24 00:15:20.472078 kernel: raid6: using algorithm avx512x2 gen() 41358 MB/s Nov 24 00:15:20.491902 kernel: raid6: .... xor() 27993 MB/s, rmw enabled Nov 24 00:15:20.491923 kernel: raid6: using avx512x2 recovery algorithm Nov 24 00:15:20.510907 kernel: xor: automatically using best checksumming function avx Nov 24 00:15:20.635915 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:15:20.641638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:15:20.646165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:15:20.669921 systemd-udevd[436]: Using default interface naming scheme 'v255'. Nov 24 00:15:20.674147 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:15:20.681418 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:15:20.701711 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Nov 24 00:15:20.721360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:15:20.724010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:15:20.785714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:15:20.794010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:15:20.834916 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:15:20.844909 kernel: AES CTR mode by8 optimization enabled Nov 24 00:15:20.869928 kernel: hv_vmbus: Vmbus version:5.3 Nov 24 00:15:20.896759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:15:20.907992 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 24 00:15:20.908016 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 24 00:15:20.908029 kernel: hv_vmbus: registering driver hv_storvsc Nov 24 00:15:20.896904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:20.905326 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:15:20.915101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:15:20.919646 kernel: scsi host0: storvsc_host_t Nov 24 00:15:20.919844 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 24 00:15:20.928650 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 24 00:15:20.934362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:15:20.934557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:20.946051 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 24 00:15:20.950997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 24 00:15:20.958652 kernel: hv_vmbus: registering driver hv_netvsc Nov 24 00:15:20.958672 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 24 00:15:20.958682 kernel: hv_vmbus: registering driver hv_pci Nov 24 00:15:20.958691 kernel: PTP clock support registered Nov 24 00:15:20.965211 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 24 00:15:20.975738 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 24 00:15:20.977073 kernel: hv_utils: Registering HyperV Utility Driver Nov 24 00:15:20.977093 kernel: hv_vmbus: registering driver hv_utils Nov 24 00:15:20.981615 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 24 00:15:20.981809 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:15:20.984901 kernel: hv_vmbus: registering driver hid_hyperv Nov 24 00:15:20.996225 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 24 00:15:20.996363 kernel: hv_utils: Shutdown IC version 3.2 Nov 24 00:15:20.996377 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd139ab3 (unnamed net_device) (uninitialized): VF slot 1 added Nov 24 00:15:20.996584 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 24 00:15:20.998201 kernel: hv_utils: Heartbeat IC version 3.0 Nov 24 00:15:21.006131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:21.025963 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 24 00:15:21.026106 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 00:15:21.026116 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 24 00:15:21.026127 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 24 00:15:21.026204 kernel: hv_utils: TimeSync IC version 4.0 Nov 24 00:15:21.194826 systemd-resolved[230]: Clock change detected. Flushing caches. Nov 24 00:15:21.198988 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:15:21.199144 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 24 00:15:21.203915 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 24 00:15:21.220966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:15:21.221119 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 24 00:15:21.226211 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 24 00:15:21.244967 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:15:21.381090 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 24 00:15:21.386921 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:15:21.688923 kernel: nvme nvme0: using unchecked data buffer Nov 24 00:15:21.862523 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 24 00:15:21.886827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 24 00:15:21.925246 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 24 00:15:21.934812 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 24 00:15:21.934953 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. 
Nov 24 00:15:21.935380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 00:15:21.936335 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:15:21.936860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:15:21.937498 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:15:21.957565 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:15:21.972331 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:15:21.982926 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:15:22.001412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:15:22.006913 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:15:22.196786 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 24 00:15:22.197029 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 24 00:15:22.199778 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 24 00:15:22.201528 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 24 00:15:22.207004 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 24 00:15:22.210954 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 24 00:15:22.216106 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 24 00:15:22.216131 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 24 00:15:22.232479 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 24 00:15:22.232672 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 24 00:15:22.237056 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 24 00:15:22.241863 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 24 00:15:22.252924 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 24 00:15:22.255965 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd139ab3 eth0: VF registering: eth1 Nov 24 00:15:22.256133 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 24 00:15:22.260920 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 24 00:15:23.012725 disk-uuid[656]: The operation has completed successfully. Nov 24 00:15:23.015236 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:15:23.095679 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:15:23.095785 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:15:23.128295 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:15:23.145677 sh[697]: Success Nov 24 00:15:23.177179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:15:23.177253 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:15:23.178985 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:15:23.187925 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 00:15:23.425411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:15:23.429820 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:15:23.438414 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 24 00:15:23.450666 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (710) Nov 24 00:15:23.450717 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 00:15:23.451935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:15:23.717378 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:15:23.717478 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:15:23.717907 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:15:23.761795 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:15:23.765400 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:15:23.767136 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:15:23.767914 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:15:23.771013 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 00:15:23.799934 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (743) Nov 24 00:15:23.803699 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:15:23.803742 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:15:23.824928 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:15:23.824982 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:15:23.824995 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:15:23.832006 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:15:23.832809 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:15:23.839952 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:15:23.866336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:15:23.870934 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:15:23.904639 systemd-networkd[879]: lo: Link UP Nov 24 00:15:23.904647 systemd-networkd[879]: lo: Gained carrier Nov 24 00:15:23.906012 systemd-networkd[879]: Enumeration completed Nov 24 00:15:23.917191 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 24 00:15:23.918546 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:15:23.918629 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd139ab3 eth0: Data path switched to VF: enP30832s1 Nov 24 00:15:23.906407 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:15:23.906410 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:15:23.906986 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:15:23.912528 systemd[1]: Reached target network.target - Network. 
Nov 24 00:15:23.915480 systemd-networkd[879]: enP30832s1: Link UP Nov 24 00:15:23.915658 systemd-networkd[879]: eth0: Link UP Nov 24 00:15:23.916119 systemd-networkd[879]: eth0: Gained carrier Nov 24 00:15:23.916131 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:15:23.922270 systemd-networkd[879]: enP30832s1: Gained carrier Nov 24 00:15:23.930944 systemd-networkd[879]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:15:25.030844 ignition[834]: Ignition 2.22.0 Nov 24 00:15:25.030858 ignition[834]: Stage: fetch-offline Nov 24 00:15:25.033495 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:15:25.030981 ignition[834]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:25.038819 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 24 00:15:25.030988 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:25.031095 ignition[834]: parsed url from cmdline: "" Nov 24 00:15:25.031098 ignition[834]: no config URL provided Nov 24 00:15:25.031102 ignition[834]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:15:25.031108 ignition[834]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:15:25.031113 ignition[834]: failed to fetch config: resource requires networking Nov 24 00:15:25.032121 ignition[834]: Ignition finished successfully Nov 24 00:15:25.064419 ignition[888]: Ignition 2.22.0 Nov 24 00:15:25.064428 ignition[888]: Stage: fetch Nov 24 00:15:25.064655 ignition[888]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:25.064663 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:25.065609 ignition[888]: parsed url from cmdline: "" Nov 24 00:15:25.065614 ignition[888]: no config URL provided Nov 24 00:15:25.065620 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:15:25.065625 ignition[888]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:15:25.065643 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 24 00:15:25.173702 ignition[888]: GET result: OK Nov 24 00:15:25.173873 ignition[888]: config has been read from IMDS userdata Nov 24 00:15:25.178236 unknown[888]: fetched base config from "system" Nov 24 00:15:25.173924 ignition[888]: parsing config with SHA512: 54a780b9032eb15257e6ce2599ebf8b2f85892d8f6535687f4f2d7424d86fa619af9c07045c1f6f6a654d4c1b8069aaa66e65452f8ce5a250231f0436897b4d5 Nov 24 00:15:25.178242 unknown[888]: fetched base config from "system" Nov 24 00:15:25.178695 ignition[888]: fetch: fetch complete Nov 24 00:15:25.178245 unknown[888]: fetched user config from "azure" Nov 24 00:15:25.178700 ignition[888]: fetch: fetch passed Nov 24 00:15:25.181007 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:15:25.178748 ignition[888]: Ignition finished successfully Nov 24 00:15:25.185077 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:15:25.220279 ignition[894]: Ignition 2.22.0 Nov 24 00:15:25.220288 ignition[894]: Stage: kargs Nov 24 00:15:25.223143 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:15:25.220530 ignition[894]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:25.227326 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 24 00:15:25.220538 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:25.221424 ignition[894]: kargs: kargs passed Nov 24 00:15:25.221460 ignition[894]: Ignition finished successfully Nov 24 00:15:25.254565 ignition[901]: Ignition 2.22.0 Nov 24 00:15:25.254576 ignition[901]: Stage: disks Nov 24 00:15:25.254785 ignition[901]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:25.254793 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:25.257680 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:15:25.255344 ignition[901]: disks: disks passed Nov 24 00:15:25.263071 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:15:25.255370 ignition[901]: Ignition finished successfully Nov 24 00:15:25.266075 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:15:25.268711 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:15:25.270153 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:15:25.271991 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:15:25.272822 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:15:25.334477 systemd-fsck[910]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 24 00:15:25.340712 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:15:25.346761 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:15:25.349979 systemd-networkd[879]: eth0: Gained IPv6LL Nov 24 00:15:25.584923 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 00:15:25.586038 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:15:25.587884 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:15:25.603240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:15:25.608985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:15:25.616029 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 24 00:15:25.619016 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:15:25.619048 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:15:25.623459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:15:25.626015 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 00:15:25.641915 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (919) Nov 24 00:15:25.644879 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:15:25.644938 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:15:25.649507 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:15:25.649554 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:15:25.650950 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:15:25.652079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 24 00:15:26.315035 coreos-metadata[921]: Nov 24 00:15:26.314 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 24 00:15:26.324472 coreos-metadata[921]: Nov 24 00:15:26.324 INFO Fetch successful Nov 24 00:15:26.326167 coreos-metadata[921]: Nov 24 00:15:26.326 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 24 00:15:26.337425 coreos-metadata[921]: Nov 24 00:15:26.337 INFO Fetch successful Nov 24 00:15:26.351309 coreos-metadata[921]: Nov 24 00:15:26.351 INFO wrote hostname ci-4459.2.1-a-980c694365 to /sysroot/etc/hostname Nov 24 00:15:26.353473 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 00:15:26.367094 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:15:26.411518 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:15:26.431070 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:15:26.435726 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:15:27.380652 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:15:27.383744 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:15:27.407047 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:15:27.414101 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:15:27.418045 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:15:27.445173 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 00:15:27.448432 ignition[1038]: INFO : Ignition 2.22.0 Nov 24 00:15:27.450223 ignition[1038]: INFO : Stage: mount Nov 24 00:15:27.450223 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:27.450223 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:27.454128 ignition[1038]: INFO : mount: mount passed Nov 24 00:15:27.454128 ignition[1038]: INFO : Ignition finished successfully Nov 24 00:15:27.458379 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:15:27.462566 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:15:27.484638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:15:27.503916 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1049) Nov 24 00:15:27.505907 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:15:27.505987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:15:27.512155 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:15:27.512196 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 24 00:15:27.513651 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:15:27.515774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
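The flatcar-metadata-hostname step above reduces to an HTTP GET for the instance name followed by one file write under the new root. A simplified sketch under the same assumptions as before (IMDS "Metadata: true" header; the agent's real code differs) is:

    # Sketch of what the metadata hostname step does, per the log: fetch the
    # instance name from IMDS, then write it to /etc/hostname under /sysroot.
    # Not the agent's actual implementation.
    import urllib.request

    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    def fetch_instance_name(timeout=5):
        req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode().strip()

    def write_hostname(name, root="/sysroot"):
        with open(f"{root}/etc/hostname", "w") as f:
            f.write(name + "\n")

    if __name__ == "__main__":
        write_hostname(fetch_instance_name())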
Nov 24 00:15:27.547815 ignition[1066]: INFO : Ignition 2.22.0 Nov 24 00:15:27.547815 ignition[1066]: INFO : Stage: files Nov 24 00:15:27.552975 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:27.552975 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:27.552975 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:15:27.563304 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:15:27.563304 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:15:27.593468 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:15:27.599008 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:15:27.599008 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:15:27.593936 unknown[1066]: wrote ssh authorized keys file for user: core Nov 24 00:15:27.609172 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:15:27.611749 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:15:27.647240 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:15:27.719489 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:15:27.724034 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:15:27.752935 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 00:15:27.982691 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:15:28.161222 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:15:28.161222 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:15:28.192404 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:15:28.199752 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:15:28.199752 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:15:28.199752 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:15:28.209947 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:15:28.209947 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:15:28.209947 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:15:28.209947 ignition[1066]: INFO : files: files passed Nov 24 00:15:28.209947 ignition[1066]: INFO : Ignition finished successfully Nov 24 00:15:28.207954 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:15:28.216411 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:15:28.227019 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:15:28.231471 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:15:28.231562 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:15:28.253857 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:15:28.253857 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:15:28.258202 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:15:28.261617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:15:28.261870 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:15:28.262701 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:15:28.308662 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:15:28.308763 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
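The files stage above boils down to a few filesystem operations under /sysroot: write fetched or inline files, create the /etc/extensions symlink for the kubernetes sysext image, install the prepare-helm.service unit, and set its preset to enabled. The much-simplified sketch below illustrates those operations; it is not Ignition's code, the helper names are invented, and the multi-user.target.wants directory used for "enable" is an assumption about the unit's [Install] section. Paths and URLs are taken from the log.

    # Simplified illustration of the operations the files stage logs above:
    # download a file, create the sysext symlink, and "enable" a unit by
    # creating a wants symlink. Ignition itself does far more (permissions,
    # users, SELinux labels, retries, verification).
    import os
    import urllib.request

    SYSROOT = "/sysroot"

    def write_remote_file(url, dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            out.write(resp.read())

    def link_extension(target, link):
        os.makedirs(os.path.dirname(link), exist_ok=True)
        if not os.path.lexists(link):
            os.symlink(target, link)

    def enable_unit(unit):
        # Assumes WantedBy=multi-user.target; real presets follow [Install].
        wants = f"{SYSROOT}/etc/systemd/system/multi-user.target.wants"
        os.makedirs(wants, exist_ok=True)
        link = f"{wants}/{unit}"
        if not os.path.lexists(link):
            os.symlink(f"/etc/systemd/system/{unit}", link)

    if __name__ == "__main__":
        write_remote_file("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                          f"{SYSROOT}/opt/helm-v3.17.3-linux-amd64.tar.gz")
        link_extension("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                       f"{SYSROOT}/etc/extensions/kubernetes.raw")
        enable_unit("prepare-helm.service")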
Nov 24 00:15:28.312469 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:15:28.312528 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:15:28.312835 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:15:28.315011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:15:28.339591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:15:28.343812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:15:28.361638 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:15:28.364854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:15:28.367777 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:15:28.371046 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:15:28.371161 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:15:28.374261 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:15:28.378050 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:15:28.381042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:15:28.385090 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:15:28.386609 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:15:28.393118 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:15:28.394970 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:15:28.399019 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:15:28.402066 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:15:28.405565 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:15:28.410050 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:15:28.411569 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:15:28.411711 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:15:28.422994 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:15:28.426076 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:15:28.428944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:15:28.430193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:15:28.434709 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:15:28.436129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:15:28.440738 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:15:28.440887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:15:28.442383 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:15:28.442484 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:15:28.442724 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 24 00:15:28.442809 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 24 00:15:28.444988 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:15:28.452167 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:15:28.452299 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:15:28.470094 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:15:28.482301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:15:28.483971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:15:28.484576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:15:28.484691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:15:28.489455 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:15:28.490004 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:15:28.499183 ignition[1120]: INFO : Ignition 2.22.0 Nov 24 00:15:28.499600 ignition[1120]: INFO : Stage: umount Nov 24 00:15:28.501119 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:15:28.501119 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 24 00:15:28.501119 ignition[1120]: INFO : umount: umount passed Nov 24 00:15:28.501119 ignition[1120]: INFO : Ignition finished successfully Nov 24 00:15:28.503605 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:15:28.504396 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:15:28.504878 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:15:28.505133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:15:28.505450 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:15:28.505481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:15:28.505747 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:15:28.505775 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:15:28.506003 systemd[1]: Stopped target network.target - Network. Nov 24 00:15:28.506030 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:15:28.506057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:15:28.506535 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:15:28.506556 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:15:28.512563 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:15:28.538956 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:15:28.541993 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:15:28.546884 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:15:28.546936 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:15:28.549060 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:15:28.549093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:15:28.550789 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:15:28.550863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:15:28.552252 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:15:28.552292 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 24 00:15:28.552656 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:15:28.552946 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:15:28.564045 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:15:28.564136 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:15:28.570592 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:15:28.570787 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:15:28.570932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:15:28.581193 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:15:28.581285 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:15:28.582277 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:15:28.584274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:15:28.584308 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:15:28.587728 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:15:28.592955 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:15:28.593011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:15:28.596287 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:15:28.596328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:15:28.599128 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:15:28.639039 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd139ab3 eth0: Data path switched from VF: enP30832s1 Nov 24 00:15:28.642437 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:15:28.599172 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:15:28.601920 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:15:28.602006 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:15:28.608997 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:15:28.618838 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:15:28.618973 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:15:28.629224 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:15:28.629366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:15:28.633748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:15:28.633823 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:15:28.639124 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:15:28.639159 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:15:28.640790 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:15:28.640888 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:15:28.646953 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:15:28.647005 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 24 00:15:28.651209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:15:28.651253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:15:28.656842 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:15:28.660362 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:15:28.660429 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:15:28.668182 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:15:28.668239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:15:28.674160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:15:28.674205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:28.691395 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:15:28.691449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:15:28.691488 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:15:28.691793 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:15:28.691880 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:15:28.692947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:15:28.693017 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:15:29.268891 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:15:29.269015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:15:29.273270 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:15:29.274425 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:15:29.274484 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:15:29.275429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:15:29.290693 systemd[1]: Switching root. Nov 24 00:15:30.040663 systemd-journald[185]: Journal stopped Nov 24 00:15:37.094546 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Nov 24 00:15:37.094580 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:15:37.094596 kernel: SELinux: policy capability open_perms=1 Nov 24 00:15:37.094605 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:15:37.094612 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:15:37.094620 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:15:37.094629 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:15:37.094638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:15:37.094648 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:15:37.094655 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:15:37.094663 kernel: audit: type=1403 audit(1763943334.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:15:37.094672 systemd[1]: Successfully loaded SELinux policy in 136.568ms. Nov 24 00:15:37.094682 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.721ms. 
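"Initializing machine ID from random generator" above corresponds to writing a fresh 128-bit ID to /etc/machine-id on first boot. The sketch below is only a rough approximation of that step (systemd's own logic also handles first-boot detection, transient-vs-committed IDs, and container/VM ID sources); uuid4() is used here because it yields 32 lowercase hex characters with the version/variant bits set, which matches the file's format.

    # Approximation of the first-boot machine-id initialization seen above:
    # a random 128-bit ID formatted as 32 lowercase hex characters.
    import uuid

    def make_machine_id():
        return uuid.uuid4().hex  # 32 hex chars, version/variant bits set

    def write_machine_id(path="/etc/machine-id"):
        mid = make_machine_id()
        with open(path, "w") as f:
            f.write(mid + "\n")
        return mid

    if __name__ == "__main__":
        # Write to a sample path rather than the real /etc/machine-id.
        print(write_machine_id("machine-id.example"))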
Nov 24 00:15:37.094693 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:15:37.094705 systemd[1]: Detected virtualization microsoft. Nov 24 00:15:37.094714 systemd[1]: Detected architecture x86-64. Nov 24 00:15:37.094724 systemd[1]: Detected first boot. Nov 24 00:15:37.094733 systemd[1]: Hostname set to . Nov 24 00:15:37.094744 systemd[1]: Initializing machine ID from random generator. Nov 24 00:15:37.094753 zram_generator::config[1164]: No configuration found. Nov 24 00:15:37.094765 kernel: Guest personality initialized and is inactive Nov 24 00:15:37.094773 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Nov 24 00:15:37.094782 kernel: Initialized host personality Nov 24 00:15:37.094790 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:15:37.094799 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:15:37.094810 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:15:37.094820 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:15:37.094829 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:15:37.094841 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:15:37.094851 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:15:37.094862 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:15:37.094872 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:15:37.094881 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:15:37.094891 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:15:37.094980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:15:37.094994 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:15:37.095004 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:15:37.095016 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:15:37.095027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:15:37.095038 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:15:37.095051 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:15:37.095062 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:15:37.095073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:15:37.095086 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:15:37.095096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:15:37.095106 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:15:37.095117 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Nov 24 00:15:37.095208 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:15:37.095220 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:15:37.095231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:15:37.095244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:15:37.095255 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:15:37.095265 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:15:37.095276 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:15:37.095287 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:15:37.095298 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:15:37.095311 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:15:37.095322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:15:37.095334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:15:37.095345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:15:37.095356 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:15:37.095366 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:15:37.095377 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:15:37.095390 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:15:37.095401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:37.095411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:15:37.095421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:15:37.095433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:15:37.095444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:15:37.095455 systemd[1]: Reached target machines.target - Containers. Nov 24 00:15:37.095466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:15:37.095477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:15:37.095489 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:15:37.095500 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:15:37.095510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:15:37.095521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:15:37.095532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:15:37.095542 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:15:37.095553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:15:37.095566 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 24 00:15:37.095580 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:15:37.095590 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:15:37.095600 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:15:37.095611 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:15:37.095623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:15:37.095634 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:15:37.095644 kernel: fuse: init (API version 7.41) Nov 24 00:15:37.095655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:15:37.095668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:15:37.095679 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:15:37.095690 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:15:37.095700 kernel: loop: module loaded Nov 24 00:15:37.095710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:15:37.095720 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:15:37.095730 systemd[1]: Stopped verity-setup.service. Nov 24 00:15:37.095741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:37.095752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:15:37.095793 systemd-journald[1257]: Collecting audit messages is disabled. Nov 24 00:15:37.095820 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:15:37.095832 systemd-journald[1257]: Journal started Nov 24 00:15:37.095860 systemd-journald[1257]: Runtime Journal (/run/log/journal/799ef31ab71f44a8bc949746a8cf3368) is 8M, max 158.6M, 150.6M free. Nov 24 00:15:36.668745 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:15:36.681475 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:15:36.681819 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:15:37.100922 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:15:37.104537 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:15:37.108118 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:15:37.109675 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:15:37.111192 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:15:37.112735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:15:37.114689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:15:37.119009 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:15:37.119173 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:15:37.122058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:15:37.122393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 24 00:15:37.124556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:15:37.124754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:15:37.126766 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:15:37.127006 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:15:37.129304 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:15:37.129473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:15:37.132173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:15:37.136381 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:15:37.141371 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:15:37.146390 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:15:37.162516 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:15:37.168992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:15:37.173999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:15:37.176986 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:15:37.177020 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:15:37.181923 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:15:37.188035 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:15:37.190611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:15:37.197107 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:15:37.201571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:15:37.204082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:15:37.208908 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:15:37.213352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:15:37.219319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:15:37.224024 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:15:37.228993 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:15:37.236017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:15:37.241987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:15:37.244391 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:15:37.248335 systemd-journald[1257]: Time spent on flushing to /var/log/journal/799ef31ab71f44a8bc949746a8cf3368 is 11.413ms for 988 entries. Nov 24 00:15:37.248335 systemd-journald[1257]: System Journal (/var/log/journal/799ef31ab71f44a8bc949746a8cf3368) is 8M, max 2.6G, 2.6G free. 
Nov 24 00:15:37.338581 systemd-journald[1257]: Received client request to flush runtime journal. Nov 24 00:15:37.338633 kernel: ACPI: bus type drm_connector registered Nov 24 00:15:37.338650 kernel: loop0: detected capacity change from 0 to 110984 Nov 24 00:15:37.254668 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:15:37.255264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:15:37.261190 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:15:37.263311 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:15:37.268859 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:15:37.302205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:15:37.339825 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:15:39.075691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:15:39.077378 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:15:39.142183 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:15:39.147083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:15:39.339623 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Nov 24 00:15:39.339639 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Nov 24 00:15:39.343382 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:15:39.985198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:15:39.991110 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:15:39.992053 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:15:40.021677 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Nov 24 00:15:40.041945 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 00:15:40.637371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:15:40.644053 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:15:40.682873 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:15:41.074924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 24 00:15:41.092933 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:15:41.133012 kernel: hv_vmbus: registering driver hv_balloon Nov 24 00:15:41.133481 kernel: hv_vmbus: registering driver hyperv_fb Nov 24 00:15:41.142360 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 24 00:15:41.142438 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 24 00:15:41.142165 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:15:41.149285 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 24 00:15:41.154198 kernel: Console: switching to colour dummy device 80x25 Nov 24 00:15:41.159098 kernel: Console: switching to colour frame buffer device 128x48 Nov 24 00:15:41.162016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:15:41.174506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 24 00:15:41.174853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:41.182060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:15:41.228532 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:15:41.736614 systemd-networkd[1336]: lo: Link UP Nov 24 00:15:41.736623 systemd-networkd[1336]: lo: Gained carrier Nov 24 00:15:41.738027 systemd-networkd[1336]: Enumeration completed Nov 24 00:15:41.741998 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 24 00:15:41.738140 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:15:41.738358 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:15:41.738362 systemd-networkd[1336]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:15:41.749541 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 24 00:15:41.749785 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd139ab3 eth0: Data path switched to VF: enP30832s1 Nov 24 00:15:41.744381 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:15:41.749297 systemd-networkd[1336]: enP30832s1: Link UP Nov 24 00:15:41.749374 systemd-networkd[1336]: eth0: Link UP Nov 24 00:15:41.749377 systemd-networkd[1336]: eth0: Gained carrier Nov 24 00:15:41.749397 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:15:41.750301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:15:41.754165 systemd-networkd[1336]: enP30832s1: Gained carrier Nov 24 00:15:41.761153 systemd-networkd[1336]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:15:41.840927 kernel: loop2: detected capacity change from 0 to 229808 Nov 24 00:15:41.937407 kernel: loop3: detected capacity change from 0 to 27936 Nov 24 00:15:41.940195 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:15:42.016923 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 24 00:15:42.533633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 24 00:15:42.538863 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:15:42.728796 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:15:43.397028 systemd-networkd[1336]: eth0: Gained IPv6LL Nov 24 00:15:43.399129 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:15:43.407296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:15:43.445121 kernel: loop4: detected capacity change from 0 to 110984 Nov 24 00:15:43.728928 kernel: loop5: detected capacity change from 0 to 128560 Nov 24 00:15:43.774919 kernel: loop6: detected capacity change from 0 to 229808 Nov 24 00:15:43.786919 kernel: loop7: detected capacity change from 0 to 27936 Nov 24 00:15:43.868268 (sd-merge)[1431]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. 
Nov 24 00:15:43.868710 (sd-merge)[1431]: Merged extensions into '/usr'. Nov 24 00:15:43.873033 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:15:43.873048 systemd[1]: Reloading... Nov 24 00:15:43.918031 zram_generator::config[1458]: No configuration found. Nov 24 00:15:44.220126 systemd[1]: Reloading finished in 346 ms. Nov 24 00:15:44.238859 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:15:44.249714 systemd[1]: Starting ensure-sysext.service... Nov 24 00:15:44.251788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:15:44.274623 systemd[1]: Reload requested from client PID 1517 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:15:44.274637 systemd[1]: Reloading... Nov 24 00:15:44.320028 zram_generator::config[1544]: No configuration found. Nov 24 00:15:44.333443 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:15:44.333475 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:15:44.333727 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:15:44.333980 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:15:44.334696 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:15:44.335077 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Nov 24 00:15:44.335139 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Nov 24 00:15:44.427458 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:15:44.427470 systemd-tmpfiles[1518]: Skipping /boot Nov 24 00:15:44.433770 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:15:44.433787 systemd-tmpfiles[1518]: Skipping /boot Nov 24 00:15:44.523313 systemd[1]: Reloading finished in 248 ms. Nov 24 00:15:44.541824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:15:44.549634 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:15:44.555085 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:15:44.560030 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:15:44.572416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:15:44.579013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:15:44.586199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:44.586598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:15:44.588322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:15:44.593950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:15:44.597662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
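The sd-merge step above merged four extension images into /usr; one of them, kubernetes, is the .raw image the initrd files stage linked into /etc/extensions earlier. The small sketch below lists the candidate extension images such a merge would consider, using what I believe are the standard systemd-sysext search paths (an assumption, and only a subset of systemd's logic; it is not systemd-sysext itself).

    # Enumerate candidate sysext images the way the merge step above would
    # see them: *.raw files or directories under the assumed search paths.
    import os

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions",
                    "/var/lib/extensions", "/usr/lib/extensions"]

    def list_extensions():
        found = []
        for base in SEARCH_PATHS:
            if not os.path.isdir(base):
                continue
            for entry in sorted(os.listdir(base)):
                path = os.path.join(base, entry)
                if entry.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    if __name__ == "__main__":
        for path in list_extensions():
            print(path)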
Nov 24 00:15:44.600988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:15:44.601124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:15:44.601222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:44.615075 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:15:44.619725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:15:44.619885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:15:44.622524 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:15:44.622664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:15:44.625180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:15:44.625338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:15:44.635348 systemd[1]: Finished ensure-sysext.service. Nov 24 00:15:44.639236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:44.639428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:15:44.641520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:15:44.647085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:15:44.649999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:15:44.658520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:15:44.660575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:15:44.660613 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:15:44.660669 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:15:44.662803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:15:44.663262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:15:44.663950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:15:44.666228 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:15:44.669093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:15:44.670821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:15:44.670991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:15:44.674368 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:15:44.674697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 24 00:15:44.678456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:15:44.678539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:15:44.936720 systemd-resolved[1611]: Positive Trust Anchors: Nov 24 00:15:44.936734 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:15:44.936766 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:15:44.940283 systemd-resolved[1611]: Using system hostname 'ci-4459.2.1-a-980c694365'. Nov 24 00:15:44.941522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:15:44.945109 systemd[1]: Reached target network.target - Network. Nov 24 00:15:44.946991 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:15:44.949574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:15:45.039787 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:15:45.540857 augenrules[1650]: No rules Nov 24 00:15:45.542052 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:15:45.542274 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:15:47.026010 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:15:47.027987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:15:52.404837 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:15:52.415423 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:15:52.418429 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:15:52.465209 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:15:52.467012 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:15:52.468682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:15:52.470331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:15:52.474024 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:15:52.475846 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:15:52.479031 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:15:52.481978 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 24 00:15:52.484973 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:15:52.485009 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:15:52.487955 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:15:52.536540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:15:52.539422 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:15:52.542696 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:15:52.545087 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:15:52.548024 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:15:52.561397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:15:52.562992 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:15:52.572206 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:15:52.575748 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:15:52.577297 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:15:52.578565 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:15:52.578594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:15:52.580674 systemd[1]: Starting chronyd.service - NTP client/server... Nov 24 00:15:52.584937 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:15:52.591397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:15:52.595074 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:15:52.601489 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:15:52.606885 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:15:52.616116 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:15:52.618334 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:15:52.621039 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:15:52.625198 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 24 00:15:52.628012 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 24 00:15:52.630370 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 24 00:15:52.632033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:15:52.636088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:15:52.644472 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:15:52.647826 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:15:52.653582 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 24 00:15:52.658665 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:15:52.669652 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:15:52.673993 jq[1667]: false Nov 24 00:15:52.677130 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:15:52.677577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:15:52.679015 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:15:52.683010 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:15:52.688484 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:15:52.691436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:15:52.691648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:15:52.695247 jq[1684]: true Nov 24 00:15:52.697238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:15:52.700098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 00:15:52.718448 jq[1689]: true Nov 24 00:15:52.743240 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing passwd entry cache Nov 24 00:15:52.742667 oslogin_cache_refresh[1672]: Refreshing passwd entry cache Nov 24 00:15:52.743681 KVP[1673]: KVP starting; pid is:1673 Nov 24 00:15:52.746008 (ntainerd)[1703]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:15:52.747539 chronyd[1662]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 24 00:15:52.753438 kernel: hv_utils: KVP IC version 4.0 Nov 24 00:15:52.751950 KVP[1673]: KVP LIC Version: 3.1 Nov 24 00:15:52.758454 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:15:52.758675 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:15:52.768828 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting users, quitting Nov 24 00:15:52.768828 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:15:52.768770 oslogin_cache_refresh[1672]: Failure getting users, quitting Nov 24 00:15:52.769000 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing group entry cache Nov 24 00:15:52.768789 oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:15:52.768835 oslogin_cache_refresh[1672]: Refreshing group entry cache Nov 24 00:15:52.780847 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting groups, quitting Nov 24 00:15:52.780847 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:15:52.780330 oslogin_cache_refresh[1672]: Failure getting groups, quitting Nov 24 00:15:52.780343 oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:15:52.782438 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Nov 24 00:15:52.782678 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:15:52.983297 chronyd[1662]: Timezone right/UTC failed leap second check, ignoring Nov 24 00:15:52.983658 systemd[1]: Started chronyd.service - NTP client/server. Nov 24 00:15:52.988157 extend-filesystems[1668]: Found /dev/nvme0n1p6 Nov 24 00:15:52.983499 chronyd[1662]: Loaded seccomp filter (level 2) Nov 24 00:15:53.191214 systemd-logind[1682]: New seat seat0. Nov 24 00:15:53.367649 dbus-daemon[1665]: [system] SELinux support is enabled Nov 24 00:15:53.522632 update_engine[1683]: I20251124 00:15:53.198940 1683 main.cc:92] Flatcar Update Engine starting Nov 24 00:15:53.522632 update_engine[1683]: I20251124 00:15:53.370687 1683 update_check_scheduler.cc:74] Next update check in 7m17s Nov 24 00:15:53.522892 extend-filesystems[1668]: Found /dev/nvme0n1p9 Nov 24 00:15:53.522892 extend-filesystems[1668]: Checking size of /dev/nvme0n1p9 Nov 24 00:15:53.239269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:15:53.367837 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:15:53.373615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:15:53.373646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:15:53.377456 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:15:53.377473 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:15:53.379799 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:15:53.384162 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:15:53.527251 systemd-logind[1682]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:15:53.527675 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:15:53.532221 tar[1687]: linux-amd64/LICENSE Nov 24 00:15:53.532221 tar[1687]: linux-amd64/helm Nov 24 00:15:53.574238 extend-filesystems[1668]: Old size kept for /dev/nvme0n1p9 Nov 24 00:15:53.578712 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:15:53.578952 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:15:53.770139 bash[1724]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:15:53.772315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:15:53.777188 coreos-metadata[1664]: Nov 24 00:15:53.776 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 24 00:15:53.776951 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 24 00:15:53.782614 coreos-metadata[1664]: Nov 24 00:15:53.782 INFO Fetch successful Nov 24 00:15:53.782614 coreos-metadata[1664]: Nov 24 00:15:53.782 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 24 00:15:53.787219 coreos-metadata[1664]: Nov 24 00:15:53.786 INFO Fetch successful Nov 24 00:15:53.787219 coreos-metadata[1664]: Nov 24 00:15:53.787 INFO Fetching http://168.63.129.16/machine/37238851-c333-46e5-9812-9e88b2aac2d3/2e4e31b8%2Ddfe9%2D4f33%2Daf7a%2Deacf0e181a95.%5Fci%2D4459.2.1%2Da%2D980c694365?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 24 00:15:53.789476 coreos-metadata[1664]: Nov 24 00:15:53.789 INFO Fetch successful Nov 24 00:15:53.789681 coreos-metadata[1664]: Nov 24 00:15:53.789 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 24 00:15:53.805292 coreos-metadata[1664]: Nov 24 00:15:53.805 INFO Fetch successful Nov 24 00:15:53.947402 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:15:53.950033 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:15:54.010629 locksmithd[1746]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:15:54.072448 sshd_keygen[1699]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:15:54.099014 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:15:54.107981 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:15:54.119091 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 24 00:15:54.130871 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:15:54.131098 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:15:54.139553 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:15:54.165813 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 24 00:15:54.168429 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:15:54.176519 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:15:54.187259 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:15:54.190019 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:15:54.199673 tar[1687]: linux-amd64/README.md Nov 24 00:15:54.216319 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
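The coreos-metadata fetches above hit both the Azure WireServer (168.63.129.16) and the instance metadata service (169.254.169.254). Below is a minimal sketch of an equivalent IMDS query, assuming only that the endpoint requires the standard "Metadata: true" request header; the URL is copied verbatim from the log entry above, and the program itself is illustrative rather than part of the metadata agent.

// Illustrative sketch: query the Azure IMDS vmSize endpoint seen in the log above.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL copied from the coreos-metadata fetch logged above.
	req, err := http.NewRequest("GET",
		"http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Metadata", "true") // required by Azure IMDS
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints the VM size string on an Azure VM
}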
Nov 24 00:15:54.460967 containerd[1703]: time="2025-11-24T00:15:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:15:54.463945 containerd[1703]: time="2025-11-24T00:15:54.463889731Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474230441Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.722µs" Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474265068Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474284190Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474414300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474426541Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474447450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474494294Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:15:54.474744 containerd[1703]: time="2025-11-24T00:15:54.474504305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475002 containerd[1703]: time="2025-11-24T00:15:54.474755534Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475002 containerd[1703]: time="2025-11-24T00:15:54.474766868Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475002 containerd[1703]: time="2025-11-24T00:15:54.474776724Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475002 containerd[1703]: time="2025-11-24T00:15:54.474785019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475002 containerd[1703]: time="2025-11-24T00:15:54.474837551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475104 containerd[1703]: time="2025-11-24T00:15:54.475032614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475104 containerd[1703]: time="2025-11-24T00:15:54.475059186Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 24 00:15:54.475104 containerd[1703]: time="2025-11-24T00:15:54.475069074Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:15:54.475104 containerd[1703]: time="2025-11-24T00:15:54.475090637Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:15:54.475319 containerd[1703]: time="2025-11-24T00:15:54.475305881Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:15:54.475378 containerd[1703]: time="2025-11-24T00:15:54.475358694Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:15:54.490581 containerd[1703]: time="2025-11-24T00:15:54.490533166Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:15:54.490739 containerd[1703]: time="2025-11-24T00:15:54.490693115Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:15:54.490739 containerd[1703]: time="2025-11-24T00:15:54.490714692Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:15:54.490907 containerd[1703]: time="2025-11-24T00:15:54.490727935Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:15:54.490907 containerd[1703]: time="2025-11-24T00:15:54.490837494Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:15:54.490907 containerd[1703]: time="2025-11-24T00:15:54.490849211Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:15:54.490907 containerd[1703]: time="2025-11-24T00:15:54.490866549Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:15:54.490907 containerd[1703]: time="2025-11-24T00:15:54.490884808Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491031609Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491045725Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491055975Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491068977Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491179618Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491196010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491215608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491228817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491241246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491251327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491262010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491272788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491283926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491295177Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:15:54.491403 containerd[1703]: time="2025-11-24T00:15:54.491320438Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:15:54.491622 containerd[1703]: time="2025-11-24T00:15:54.491366605Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:15:54.491622 containerd[1703]: time="2025-11-24T00:15:54.491380212Z" level=info msg="Start snapshots syncer" Nov 24 00:15:54.491680 containerd[1703]: time="2025-11-24T00:15:54.491673624Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:15:54.492008 containerd[1703]: time="2025-11-24T00:15:54.491974920Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:15:54.492260 containerd[1703]: time="2025-11-24T00:15:54.492218917Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:15:54.492318 containerd[1703]: time="2025-11-24T00:15:54.492307109Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492461772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492484121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492495109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492505481Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492519926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492530851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492542326Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492566335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: 
time="2025-11-24T00:15:54.492578829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:15:54.492625 containerd[1703]: time="2025-11-24T00:15:54.492589562Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:15:54.492924 containerd[1703]: time="2025-11-24T00:15:54.492914289Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:15:54.493003 containerd[1703]: time="2025-11-24T00:15:54.492991669Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493030962Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493042624Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493051823Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493061549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493076625Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493088741Z" level=info msg="runtime interface created" Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493092535Z" level=info msg="created NRI interface" Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493098651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493108039Z" level=info msg="Connect containerd service" Nov 24 00:15:54.493143 containerd[1703]: time="2025-11-24T00:15:54.493126603Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:15:54.494274 containerd[1703]: time="2025-11-24T00:15:54.494196943Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:15:54.538028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:15:54.556883 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:15:54.974530 containerd[1703]: time="2025-11-24T00:15:54.974462710Z" level=info msg="Start subscribing containerd event" Nov 24 00:15:54.974845 containerd[1703]: time="2025-11-24T00:15:54.974702110Z" level=info msg="Start recovering state" Nov 24 00:15:54.974845 containerd[1703]: time="2025-11-24T00:15:54.974825952Z" level=info msg="Start event monitor" Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974938082Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974947975Z" level=info msg="Start streaming server" Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974961101Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974968943Z" level=info msg="runtime interface starting up..." Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974974967Z" level=info msg="starting plugins..." Nov 24 00:15:54.975044 containerd[1703]: time="2025-11-24T00:15:54.974988260Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:15:54.975164 containerd[1703]: time="2025-11-24T00:15:54.975109182Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:15:54.975164 containerd[1703]: time="2025-11-24T00:15:54.975153851Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:15:54.975448 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:15:54.979606 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:15:54.982701 systemd[1]: Startup finished in 3.195s (kernel) + 14.481s (initrd) + 20.619s (userspace) = 38.296s. Nov 24 00:15:54.985065 containerd[1703]: time="2025-11-24T00:15:54.983232888Z" level=info msg="containerd successfully booted in 0.522814s" Nov 24 00:15:55.137710 kubelet[1811]: E1124 00:15:55.137662 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:15:55.139752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:15:55.139888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:15:55.140294 systemd[1]: kubelet.service: Consumed 974ms CPU time, 267.5M memory peak. Nov 24 00:15:55.264065 login[1800]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 24 00:15:55.264412 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:15:55.269933 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:15:55.271124 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:15:55.281008 systemd-logind[1682]: New session 1 of user core. Nov 24 00:15:55.290183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:15:55.292427 systemd[1]: Starting user@500.service - User Manager for UID 500... 
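The kubelet failure above is the expected state on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm (note the unset KUBELET_KUBEADM_ARGS variable in the same entry), so until then the unit exits with status 1 and systemd keeps scheduling restarts. The following is a minimal sketch that reproduces the same precondition check; the path comes from the logged error, and the helper is illustrative only, not part of Flatcar or kubelet.

// Illustrative sketch: check the precondition that kubelet is failing on above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const kubeletConfig = "/var/lib/kubelet/config.yaml" // path reported in the run.go:72 error above
	if _, err := os.Stat(kubeletConfig); err != nil {
		// Expected before kubeadm init/join has written the file.
		fmt.Printf("kubelet config missing: %v\n", err)
		return
	}
	fmt.Println("kubelet config present; kubelet should be able to start")
}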
Nov 24 00:15:55.302725 (systemd)[1836]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:15:55.304619 systemd-logind[1682]: New session c1 of user core. Nov 24 00:15:55.487609 systemd[1836]: Queued start job for default target default.target. Nov 24 00:15:55.494666 systemd[1836]: Created slice app.slice - User Application Slice. Nov 24 00:15:55.494695 systemd[1836]: Reached target paths.target - Paths. Nov 24 00:15:55.494729 systemd[1836]: Reached target timers.target - Timers. Nov 24 00:15:55.495763 systemd[1836]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:15:55.507493 systemd[1836]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:15:55.508435 systemd[1836]: Reached target sockets.target - Sockets. Nov 24 00:15:55.508564 systemd[1836]: Reached target basic.target - Basic System. Nov 24 00:15:55.508629 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:15:55.508644 systemd[1836]: Reached target default.target - Main User Target. Nov 24 00:15:55.508668 systemd[1836]: Startup finished in 196ms. Nov 24 00:15:55.516018 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:15:55.638152 waagent[1797]: 2025-11-24T00:15:55.638068Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 24 00:15:55.639914 waagent[1797]: 2025-11-24T00:15:55.639854Z INFO Daemon Daemon OS: flatcar 4459.2.1 Nov 24 00:15:55.641127 waagent[1797]: 2025-11-24T00:15:55.640197Z INFO Daemon Daemon Python: 3.11.13 Nov 24 00:15:55.642368 waagent[1797]: 2025-11-24T00:15:55.642299Z INFO Daemon Daemon Run daemon Nov 24 00:15:55.643452 waagent[1797]: 2025-11-24T00:15:55.643421Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.1' Nov 24 00:15:55.645390 waagent[1797]: 2025-11-24T00:15:55.645363Z INFO Daemon Daemon Using waagent for provisioning Nov 24 00:15:55.646735 waagent[1797]: 2025-11-24T00:15:55.646705Z INFO Daemon Daemon Activate resource disk Nov 24 00:15:55.649922 waagent[1797]: 2025-11-24T00:15:55.648103Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 24 00:15:55.650936 waagent[1797]: 2025-11-24T00:15:55.649881Z INFO Daemon Daemon Found device: None Nov 24 00:15:55.651372 waagent[1797]: 2025-11-24T00:15:55.651030Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 24 00:15:55.651372 waagent[1797]: 2025-11-24T00:15:55.651117Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 24 00:15:55.651372 waagent[1797]: 2025-11-24T00:15:55.651937Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:15:55.651372 waagent[1797]: 2025-11-24T00:15:55.652146Z INFO Daemon Daemon Running default provisioning handler Nov 24 00:15:55.671382 waagent[1797]: 2025-11-24T00:15:55.659994Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Nov 24 00:15:55.671382 waagent[1797]: 2025-11-24T00:15:55.660551Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 24 00:15:55.671382 waagent[1797]: 2025-11-24T00:15:55.660705Z INFO Daemon Daemon cloud-init is enabled: False Nov 24 00:15:55.671382 waagent[1797]: 2025-11-24T00:15:55.660761Z INFO Daemon Daemon Copying ovf-env.xml Nov 24 00:15:55.709434 waagent[1797]: 2025-11-24T00:15:55.709110Z INFO Daemon Daemon Successfully mounted dvd Nov 24 00:15:55.735422 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 24 00:15:55.737191 waagent[1797]: 2025-11-24T00:15:55.737144Z INFO Daemon Daemon Detect protocol endpoint Nov 24 00:15:55.740558 waagent[1797]: 2025-11-24T00:15:55.737420Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 24 00:15:55.740558 waagent[1797]: 2025-11-24T00:15:55.737676Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 24 00:15:55.740558 waagent[1797]: 2025-11-24T00:15:55.737935Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 24 00:15:55.740558 waagent[1797]: 2025-11-24T00:15:55.738372Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 24 00:15:55.740558 waagent[1797]: 2025-11-24T00:15:55.738582Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 24 00:15:55.748904 waagent[1797]: 2025-11-24T00:15:55.748864Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 24 00:15:55.750536 waagent[1797]: 2025-11-24T00:15:55.749176Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 24 00:15:55.750536 waagent[1797]: 2025-11-24T00:15:55.749680Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 24 00:15:55.854770 waagent[1797]: 2025-11-24T00:15:55.854614Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 24 00:15:55.855995 waagent[1797]: 2025-11-24T00:15:55.854925Z INFO Daemon Daemon Forcing an update of the goal state. Nov 24 00:15:55.861681 waagent[1797]: 2025-11-24T00:15:55.861638Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:15:55.886404 waagent[1797]: 2025-11-24T00:15:55.886295Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Nov 24 00:15:55.889436 waagent[1797]: 2025-11-24T00:15:55.887069Z INFO Daemon Nov 24 00:15:55.889436 waagent[1797]: 2025-11-24T00:15:55.887316Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0e08bd1e-df75-48fe-a21c-7906ae0d627b eTag: 16525373273346592442 source: Fabric] Nov 24 00:15:55.889436 waagent[1797]: 2025-11-24T00:15:55.887934Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 24 00:15:55.889436 waagent[1797]: 2025-11-24T00:15:55.888229Z INFO Daemon Nov 24 00:15:55.889436 waagent[1797]: 2025-11-24T00:15:55.888458Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:15:55.896098 waagent[1797]: 2025-11-24T00:15:55.896068Z INFO Daemon Daemon Downloading artifacts profile blob Nov 24 00:15:56.054550 waagent[1797]: 2025-11-24T00:15:56.054474Z INFO Daemon Downloaded certificate {'thumbprint': '5477A0C144A045E772651FE208DC32A59BB943C3', 'hasPrivateKey': True} Nov 24 00:15:56.056771 waagent[1797]: 2025-11-24T00:15:56.056727Z INFO Daemon Fetch goal state completed Nov 24 00:15:56.107389 waagent[1797]: 2025-11-24T00:15:56.107208Z INFO Daemon Daemon Starting provisioning Nov 24 00:15:56.110782 waagent[1797]: 2025-11-24T00:15:56.107534Z INFO Daemon Daemon Handle ovf-env.xml. 
Nov 24 00:15:56.110782 waagent[1797]: 2025-11-24T00:15:56.107682Z INFO Daemon Daemon Set hostname [ci-4459.2.1-a-980c694365] Nov 24 00:15:56.122827 waagent[1797]: 2025-11-24T00:15:56.122772Z INFO Daemon Daemon Publish hostname [ci-4459.2.1-a-980c694365] Nov 24 00:15:56.130171 waagent[1797]: 2025-11-24T00:15:56.123176Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 24 00:15:56.130171 waagent[1797]: 2025-11-24T00:15:56.123445Z INFO Daemon Daemon Primary interface is [eth0] Nov 24 00:15:56.132310 systemd-networkd[1336]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:15:56.132319 systemd-networkd[1336]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:15:56.132353 systemd-networkd[1336]: eth0: DHCP lease lost Nov 24 00:15:56.133352 waagent[1797]: 2025-11-24T00:15:56.133299Z INFO Daemon Daemon Create user account if not exists Nov 24 00:15:56.135131 waagent[1797]: 2025-11-24T00:15:56.135092Z INFO Daemon Daemon User core already exists, skip useradd Nov 24 00:15:56.136332 waagent[1797]: 2025-11-24T00:15:56.135232Z INFO Daemon Daemon Configure sudoer Nov 24 00:15:56.140487 waagent[1797]: 2025-11-24T00:15:56.140440Z INFO Daemon Daemon Configure sshd Nov 24 00:15:56.146317 waagent[1797]: 2025-11-24T00:15:56.146275Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 24 00:15:56.150378 waagent[1797]: 2025-11-24T00:15:56.149892Z INFO Daemon Daemon Deploy ssh public key. Nov 24 00:15:56.149953 systemd-networkd[1336]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 24 00:15:56.265774 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 24 00:15:56.270570 systemd-logind[1682]: New session 2 of user core. Nov 24 00:15:56.276100 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:15:57.233524 waagent[1797]: 2025-11-24T00:15:57.233466Z INFO Daemon Daemon Provisioning complete Nov 24 00:15:57.243473 waagent[1797]: 2025-11-24T00:15:57.243435Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 24 00:15:57.249081 waagent[1797]: 2025-11-24T00:15:57.243666Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 24 00:15:57.249081 waagent[1797]: 2025-11-24T00:15:57.243947Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 24 00:15:57.351463 waagent[1886]: 2025-11-24T00:15:57.351372Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 24 00:15:57.351806 waagent[1886]: 2025-11-24T00:15:57.351500Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.1 Nov 24 00:15:57.351806 waagent[1886]: 2025-11-24T00:15:57.351541Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 24 00:15:57.351806 waagent[1886]: 2025-11-24T00:15:57.351579Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 24 00:15:57.393571 waagent[1886]: 2025-11-24T00:15:57.393493Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 24 00:15:57.393737 waagent[1886]: 2025-11-24T00:15:57.393708Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:15:57.393783 waagent[1886]: 2025-11-24T00:15:57.393767Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:15:57.400744 waagent[1886]: 2025-11-24T00:15:57.400683Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 24 00:15:57.407853 waagent[1886]: 2025-11-24T00:15:57.407816Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Nov 24 00:15:57.408234 waagent[1886]: 2025-11-24T00:15:57.408198Z INFO ExtHandler Nov 24 00:15:57.408295 waagent[1886]: 2025-11-24T00:15:57.408257Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: efe800cd-d1a7-42f6-941e-130275d175e9 eTag: 16525373273346592442 source: Fabric] Nov 24 00:15:57.408495 waagent[1886]: 2025-11-24T00:15:57.408468Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 24 00:15:57.408842 waagent[1886]: 2025-11-24T00:15:57.408816Z INFO ExtHandler Nov 24 00:15:57.408878 waagent[1886]: 2025-11-24T00:15:57.408855Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 24 00:15:57.412485 waagent[1886]: 2025-11-24T00:15:57.412450Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 24 00:15:57.484764 waagent[1886]: 2025-11-24T00:15:57.484650Z INFO ExtHandler Downloaded certificate {'thumbprint': '5477A0C144A045E772651FE208DC32A59BB943C3', 'hasPrivateKey': True} Nov 24 00:15:57.485145 waagent[1886]: 2025-11-24T00:15:57.485113Z INFO ExtHandler Fetch goal state completed Nov 24 00:15:57.498713 waagent[1886]: 2025-11-24T00:15:57.498659Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 24 00:15:57.502994 waagent[1886]: 2025-11-24T00:15:57.502938Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1886 Nov 24 00:15:57.503109 waagent[1886]: 2025-11-24T00:15:57.503087Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 24 00:15:57.503357 waagent[1886]: 2025-11-24T00:15:57.503332Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 24 00:15:57.504442 waagent[1886]: 2025-11-24T00:15:57.504404Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] Nov 24 00:15:57.504730 waagent[1886]: 2025-11-24T00:15:57.504703Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 24 00:15:57.504836 waagent[1886]: 2025-11-24T00:15:57.504815Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 24 00:15:57.505278 waagent[1886]: 2025-11-24T00:15:57.505254Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 24 00:15:57.560803 waagent[1886]: 2025-11-24T00:15:57.560765Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 24 00:15:57.561009 waagent[1886]: 2025-11-24T00:15:57.560985Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 24 00:15:57.566593 waagent[1886]: 2025-11-24T00:15:57.566561Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 24 00:15:57.572259 systemd[1]: Reload requested from client PID 1901 ('systemctl') (unit waagent.service)... Nov 24 00:15:57.572273 systemd[1]: Reloading... Nov 24 00:15:57.643953 zram_generator::config[1937]: No configuration found. Nov 24 00:15:57.804219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#228 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 24 00:15:57.838481 systemd[1]: Reloading finished in 265 ms. Nov 24 00:15:57.853043 waagent[1886]: 2025-11-24T00:15:57.852228Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 24 00:15:57.853043 waagent[1886]: 2025-11-24T00:15:57.852377Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 24 00:15:58.061213 waagent[1886]: 2025-11-24T00:15:58.061135Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 24 00:15:58.061517 waagent[1886]: 2025-11-24T00:15:58.061482Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 24 00:15:58.062285 waagent[1886]: 2025-11-24T00:15:58.062174Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 24 00:15:58.062566 waagent[1886]: 2025-11-24T00:15:58.062533Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 24 00:15:58.062782 waagent[1886]: 2025-11-24T00:15:58.062754Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:15:58.062858 waagent[1886]: 2025-11-24T00:15:58.062833Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 24 00:15:58.063032 waagent[1886]: 2025-11-24T00:15:58.062951Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:15:58.063096 waagent[1886]: 2025-11-24T00:15:58.063072Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 24 00:15:58.063187 waagent[1886]: 2025-11-24T00:15:58.063166Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 24 00:15:58.063505 waagent[1886]: 2025-11-24T00:15:58.063479Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 24 00:15:58.063610 waagent[1886]: 2025-11-24T00:15:58.063574Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 24 00:15:58.063714 waagent[1886]: 2025-11-24T00:15:58.063691Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 24 00:15:58.063776 waagent[1886]: 2025-11-24T00:15:58.063758Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 24 00:15:58.063910 waagent[1886]: 2025-11-24T00:15:58.063866Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 24 00:15:58.064220 waagent[1886]: 2025-11-24T00:15:58.064198Z INFO EnvHandler ExtHandler Configure routes Nov 24 00:15:58.064343 waagent[1886]: 2025-11-24T00:15:58.064307Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 24 00:15:58.064343 waagent[1886]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 24 00:15:58.064343 waagent[1886]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 24 00:15:58.064343 waagent[1886]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 24 00:15:58.064343 waagent[1886]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:15:58.064343 waagent[1886]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:15:58.064343 waagent[1886]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 24 00:15:58.065127 waagent[1886]: 2025-11-24T00:15:58.065099Z INFO EnvHandler ExtHandler Gateway:None Nov 24 00:15:58.065787 waagent[1886]: 2025-11-24T00:15:58.065731Z INFO EnvHandler ExtHandler Routes:None Nov 24 00:15:58.081390 waagent[1886]: 2025-11-24T00:15:58.081346Z INFO ExtHandler ExtHandler Nov 24 00:15:58.081462 waagent[1886]: 2025-11-24T00:15:58.081415Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7cea558f-d74a-4dcb-9ac3-a74189cdb4f3 correlation 4502aaf9-b732-4cfa-b10d-c2882857c77c created: 2025-11-24T00:14:50.964340Z] Nov 24 00:15:58.081711 waagent[1886]: 2025-11-24T00:15:58.081685Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 24 00:15:58.082134 waagent[1886]: 2025-11-24T00:15:58.082110Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 24 00:15:58.111560 waagent[1886]: 2025-11-24T00:15:58.111462Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 24 00:15:58.111560 waagent[1886]: Try `iptables -h' or 'iptables --help' for more information.) 
Nov 24 00:15:58.112334 waagent[1886]: 2025-11-24T00:15:58.112227Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BCB6F003-3D38-4707-BF98-B7B6F631D9DA;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 24 00:15:58.135635 waagent[1886]: 2025-11-24T00:15:58.135579Z INFO MonitorHandler ExtHandler Network interfaces: Nov 24 00:15:58.135635 waagent[1886]: Executing ['ip', '-a', '-o', 'link']: Nov 24 00:15:58.135635 waagent[1886]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 24 00:15:58.135635 waagent[1886]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:13:9a:b3 brd ff:ff:ff:ff:ff:ff\ alias Network Device Nov 24 00:15:58.135635 waagent[1886]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:13:9a:b3 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 24 00:15:58.135635 waagent[1886]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 24 00:15:58.135635 waagent[1886]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 24 00:15:58.135635 waagent[1886]: 2: eth0 inet 10.200.4.36/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 24 00:15:58.135635 waagent[1886]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 24 00:15:58.135635 waagent[1886]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 24 00:15:58.135635 waagent[1886]: 2: eth0 inet6 fe80::6245:bdff:fe13:9ab3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 24 00:15:58.180808 waagent[1886]: 2025-11-24T00:15:58.180751Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 24 00:15:58.180808 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.180808 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.180808 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.180808 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.180808 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.180808 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.180808 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 24 00:15:58.180808 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 24 00:15:58.180808 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 24 00:15:58.183852 waagent[1886]: 2025-11-24T00:15:58.183810Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 24 00:15:58.183852 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.183852 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.183852 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.183852 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.183852 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 24 00:15:58.183852 waagent[1886]: pkts bytes target prot opt in out source destination Nov 24 00:15:58.183852 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 24 00:15:58.183852 waagent[1886]: 0 0 ACCEPT 
tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 24 00:15:58.183852 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 24 00:16:05.212387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:16:05.213944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:05.748238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:05.754125 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:16:05.788262 kubelet[2038]: E1124 00:16:05.788214 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:16:05.791745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:16:05.791884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:16:05.792253 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.8M memory peak. Nov 24 00:16:07.482534 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:16:07.483604 systemd[1]: Started sshd@0-10.200.4.36:22-10.200.16.10:57586.service - OpenSSH per-connection server daemon (10.200.16.10:57586). Nov 24 00:16:08.150713 sshd[2047]: Accepted publickey for core from 10.200.16.10 port 57586 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:08.151865 sshd-session[2047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:08.155974 systemd-logind[1682]: New session 3 of user core. Nov 24 00:16:08.163043 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:16:08.677744 systemd[1]: Started sshd@1-10.200.4.36:22-10.200.16.10:57600.service - OpenSSH per-connection server daemon (10.200.16.10:57600). Nov 24 00:16:09.276198 sshd[2053]: Accepted publickey for core from 10.200.16.10 port 57600 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:09.277397 sshd-session[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:09.281785 systemd-logind[1682]: New session 4 of user core. Nov 24 00:16:09.290062 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:16:09.693501 sshd[2056]: Connection closed by 10.200.16.10 port 57600 Nov 24 00:16:09.694270 sshd-session[2053]: pam_unix(sshd:session): session closed for user core Nov 24 00:16:09.697397 systemd[1]: sshd@1-10.200.4.36:22-10.200.16.10:57600.service: Deactivated successfully. Nov 24 00:16:09.699069 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:16:09.701350 systemd-logind[1682]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:16:09.702275 systemd-logind[1682]: Removed session 4. Nov 24 00:16:09.797956 systemd[1]: Started sshd@2-10.200.4.36:22-10.200.16.10:57602.service - OpenSSH per-connection server daemon (10.200.16.10:57602). 
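The "Created firewall rules for the Azure Fabric" listing reported by EnvHandler further up corresponds to three OUTPUT rules protecting the WireServer address 168.63.129.16: allow DNS on tcp/53, allow root-owned (UID 0) traffic such as the agent itself, and drop other new or invalid connections. Below is a sketch of shell-equivalent rules driven from Go, assuming the security table (suggested by the earlier "iptables -w -t security -L OUTPUT" invocation); the match criteria and ordering are taken from the log, everything else is illustrative and not waagent's actual code.

// Illustrative sketch: recreate the WireServer OUTPUT rules listed in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	rules := [][]string{
		// Allow DNS lookups to the WireServer.
		{"-t", "security", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "--dport", "53", "-j", "ACCEPT"},
		// Allow traffic owned by root (UID 0), i.e. the agent.
		{"-t", "security", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"},
		// Drop new or invalid connections from anything else.
		{"-t", "security", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"},
	}
	for _, r := range rules {
		args := append([]string{"-w"}, r...)
		if out, err := exec.Command("iptables", args...).CombinedOutput(); err != nil {
			fmt.Printf("iptables %v failed: %v: %s\n", r, err, out)
		}
	}
}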
Nov 24 00:16:10.392867 sshd[2062]: Accepted publickey for core from 10.200.16.10 port 57602 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:10.394048 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:10.398649 systemd-logind[1682]: New session 5 of user core. Nov 24 00:16:10.404066 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:16:10.811128 sshd[2065]: Connection closed by 10.200.16.10 port 57602 Nov 24 00:16:10.811766 sshd-session[2062]: pam_unix(sshd:session): session closed for user core Nov 24 00:16:10.815705 systemd[1]: sshd@2-10.200.4.36:22-10.200.16.10:57602.service: Deactivated successfully. Nov 24 00:16:10.817266 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:16:10.817945 systemd-logind[1682]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:16:10.819328 systemd-logind[1682]: Removed session 5. Nov 24 00:16:10.920678 systemd[1]: Started sshd@3-10.200.4.36:22-10.200.16.10:38430.service - OpenSSH per-connection server daemon (10.200.16.10:38430). Nov 24 00:16:11.519390 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 38430 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:11.520566 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:11.525633 systemd-logind[1682]: New session 6 of user core. Nov 24 00:16:11.531081 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:16:11.951533 sshd[2074]: Connection closed by 10.200.16.10 port 38430 Nov 24 00:16:11.952275 sshd-session[2071]: pam_unix(sshd:session): session closed for user core Nov 24 00:16:11.955810 systemd[1]: sshd@3-10.200.4.36:22-10.200.16.10:38430.service: Deactivated successfully. Nov 24 00:16:11.957363 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:16:11.958034 systemd-logind[1682]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:16:11.959428 systemd-logind[1682]: Removed session 6. Nov 24 00:16:12.067855 systemd[1]: Started sshd@4-10.200.4.36:22-10.200.16.10:38446.service - OpenSSH per-connection server daemon (10.200.16.10:38446). Nov 24 00:16:12.669667 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 38446 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:12.670776 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:12.675332 systemd-logind[1682]: New session 7 of user core. Nov 24 00:16:12.680054 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:16:13.105009 sudo[2084]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:16:13.105238 sudo[2084]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:16:13.114735 sudo[2084]: pam_unix(sudo:session): session closed for user root Nov 24 00:16:13.208539 sshd[2083]: Connection closed by 10.200.16.10 port 38446 Nov 24 00:16:13.209615 sshd-session[2080]: pam_unix(sshd:session): session closed for user core Nov 24 00:16:13.212963 systemd[1]: sshd@4-10.200.4.36:22-10.200.16.10:38446.service: Deactivated successfully. Nov 24 00:16:13.214563 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:16:13.216314 systemd-logind[1682]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:16:13.217140 systemd-logind[1682]: Removed session 7. 
Nov 24 00:16:13.316883 systemd[1]: Started sshd@5-10.200.4.36:22-10.200.16.10:38452.service - OpenSSH per-connection server daemon (10.200.16.10:38452). Nov 24 00:16:13.917085 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 38452 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:13.918289 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:13.922971 systemd-logind[1682]: New session 8 of user core. Nov 24 00:16:13.931119 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:16:14.242277 sudo[2095]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:16:14.242510 sudo[2095]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:16:14.249312 sudo[2095]: pam_unix(sudo:session): session closed for user root Nov 24 00:16:14.253495 sudo[2094]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:16:14.253718 sudo[2094]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:16:14.264189 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:16:14.298073 augenrules[2117]: No rules Nov 24 00:16:14.299125 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:16:14.299396 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:16:14.301059 sudo[2094]: pam_unix(sudo:session): session closed for user root Nov 24 00:16:14.406174 sshd[2093]: Connection closed by 10.200.16.10 port 38452 Nov 24 00:16:14.406745 sshd-session[2090]: pam_unix(sshd:session): session closed for user core Nov 24 00:16:14.410484 systemd[1]: sshd@5-10.200.4.36:22-10.200.16.10:38452.service: Deactivated successfully. Nov 24 00:16:14.412084 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:16:14.412800 systemd-logind[1682]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:16:14.414092 systemd-logind[1682]: Removed session 8. Nov 24 00:16:14.512868 systemd[1]: Started sshd@6-10.200.4.36:22-10.200.16.10:38460.service - OpenSSH per-connection server daemon (10.200.16.10:38460). Nov 24 00:16:15.124579 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 38460 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:16:15.125932 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:16:15.130538 systemd-logind[1682]: New session 9 of user core. Nov 24 00:16:15.139079 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:16:15.449225 sudo[2130]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:16:15.449452 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:16:15.962203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:16:15.963876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:16.560333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:16:16.570314 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:16:16.603943 kubelet[2152]: E1124 00:16:16.603885 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:16:16.605950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:16:16.606099 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:16:16.606446 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110.2M memory peak. Nov 24 00:16:16.774647 chronyd[1662]: Selected source PHC0 Nov 24 00:16:16.916807 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:16:16.933307 (dockerd)[2164]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:16:17.867421 dockerd[2164]: time="2025-11-24T00:16:17.867159748Z" level=info msg="Starting up" Nov 24 00:16:17.870609 dockerd[2164]: time="2025-11-24T00:16:17.870570493Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:16:17.879696 dockerd[2164]: time="2025-11-24T00:16:17.879653158Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:16:18.013907 dockerd[2164]: time="2025-11-24T00:16:18.013835532Z" level=info msg="Loading containers: start." Nov 24 00:16:18.052002 kernel: Initializing XFRM netlink socket Nov 24 00:16:18.341392 systemd-networkd[1336]: docker0: Link UP Nov 24 00:16:18.356445 dockerd[2164]: time="2025-11-24T00:16:18.356405252Z" level=info msg="Loading containers: done." Nov 24 00:16:18.374610 dockerd[2164]: time="2025-11-24T00:16:18.374561766Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:16:18.374767 dockerd[2164]: time="2025-11-24T00:16:18.374651932Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:16:18.374767 dockerd[2164]: time="2025-11-24T00:16:18.374728832Z" level=info msg="Initializing buildkit" Nov 24 00:16:18.444322 dockerd[2164]: time="2025-11-24T00:16:18.444267172Z" level=info msg="Completed buildkit initialization" Nov 24 00:16:18.451431 dockerd[2164]: time="2025-11-24T00:16:18.451377729Z" level=info msg="Daemon has completed initialization" Nov 24 00:16:18.451704 dockerd[2164]: time="2025-11-24T00:16:18.451451277Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:16:18.451816 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:16:19.608947 containerd[1703]: time="2025-11-24T00:16:19.608891864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:16:20.450693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99289492.mount: Deactivated successfully. 
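The kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet (it is typically written by kubeadm or the provisioning step, such as the install.sh invoked above), so systemd keeps scheduling restarts. A small diagnostic sketch, assuming the journal has been exported to a text file (journal.txt is a placeholder name):

import os
import re

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # the path reported as missing in the error above

def diagnose(journal_path="journal.txt"):
    if os.path.exists(KUBELET_CONFIG):
        print(f"{KUBELET_CONFIG} exists ({os.path.getsize(KUBELET_CONFIG)} bytes)")
    else:
        print(f"{KUBELET_CONFIG} is missing; kubelet will keep exiting with status 1 until it is written")
    # Count how many times systemd has already rescheduled the unit.
    counter_re = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        counters = [int(m.group(1)) for m in counter_re.finditer(fh.read())]
    if counters:
        print(f"restart counter values seen so far: {counters}")

if __name__ == "__main__":
    diagnose()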
Nov 24 00:16:21.734583 containerd[1703]: time="2025-11-24T00:16:21.734529462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:21.737288 containerd[1703]: time="2025-11-24T00:16:21.737143009Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113221" Nov 24 00:16:21.740394 containerd[1703]: time="2025-11-24T00:16:21.740368259Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:21.744400 containerd[1703]: time="2025-11-24T00:16:21.744363171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:21.745158 containerd[1703]: time="2025-11-24T00:16:21.744973685Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 2.136027603s" Nov 24 00:16:21.745158 containerd[1703]: time="2025-11-24T00:16:21.745006888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:16:21.745722 containerd[1703]: time="2025-11-24T00:16:21.745703123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:16:26.712070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 24 00:16:26.713798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:27.265817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:27.277127 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:16:27.312601 kubelet[2436]: E1124 00:16:27.312555 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:16:27.314601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:16:27.314750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:16:27.315131 systemd[1]: kubelet.service: Consumed 138ms CPU time, 109.8M memory peak. Nov 24 00:16:29.263022 kernel: hv_balloon: Max. 
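The pull record above carries both the image size in bytes and the wall-clock duration, so the effective download rate can be read straight off it. A back-of-the-envelope calculation using the kube-apiserver figures reported above:

# Values copied from the containerd entry above.
size_bytes = 30_109_812      # size "30109812"
duration_s = 2.136027603     # "in 2.136027603s"

rate = size_bytes / duration_s / (1024 * 1024)
print(f"kube-apiserver pull: {rate:.1f} MiB/s")   # roughly 13.4 MiB/s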
dynamic memory size: 8192 MB Nov 24 00:16:31.542135 containerd[1703]: time="2025-11-24T00:16:31.542081974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:31.545692 containerd[1703]: time="2025-11-24T00:16:31.545654413Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018115" Nov 24 00:16:31.549911 containerd[1703]: time="2025-11-24T00:16:31.548756586Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:31.556441 containerd[1703]: time="2025-11-24T00:16:31.556369092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:31.557015 containerd[1703]: time="2025-11-24T00:16:31.556988200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 9.811187018s" Nov 24 00:16:31.557083 containerd[1703]: time="2025-11-24T00:16:31.557025517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:16:31.557607 containerd[1703]: time="2025-11-24T00:16:31.557585108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:16:37.462214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 24 00:16:37.463794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:37.946042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:37.955130 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:16:37.986599 kubelet[2456]: E1124 00:16:37.986554 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:16:37.988517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:16:37.988652 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:16:37.989022 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107.9M memory peak. Nov 24 00:16:38.706669 update_engine[1683]: I20251124 00:16:38.706586 1683 update_attempter.cc:509] Updating boot flags... 
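The kubelet restart attempts so far land at roughly 00:16:15, 00:16:26, 00:16:37 and 00:16:48, i.e. every 10-11 seconds, consistent with a unit restarting on a ~10 s RestartSec plus its own startup time. A sketch that extracts those gaps from an exported journal (one entry per line assumed; the year is supplied by hand because the short journal format omits it):

import re
from datetime import datetime

RESTART_RE = re.compile(
    r"([A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) .*kubelet\.service: Scheduled restart job"
)

def restart_gaps(journal_path="journal.txt", year=2025):
    stamps = []
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = RESTART_RE.search(line)
            if m:
                stamps.append(datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f"))
    for earlier, later in zip(stamps, stamps[1:]):
        print(f"{(later - earlier).total_seconds():.1f}s between scheduled restarts")

if __name__ == "__main__":
    restart_gaps()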
Nov 24 00:16:42.036029 containerd[1703]: time="2025-11-24T00:16:42.035979353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:42.038817 containerd[1703]: time="2025-11-24T00:16:42.038665354Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156490" Nov 24 00:16:42.048127 containerd[1703]: time="2025-11-24T00:16:42.048094131Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:42.052623 containerd[1703]: time="2025-11-24T00:16:42.052593945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:42.053782 containerd[1703]: time="2025-11-24T00:16:42.053327443Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 10.495710971s" Nov 24 00:16:42.053782 containerd[1703]: time="2025-11-24T00:16:42.053360330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:16:42.054019 containerd[1703]: time="2025-11-24T00:16:42.054001641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:16:46.972104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226435001.mount: Deactivated successfully. 
Nov 24 00:16:47.364283 containerd[1703]: time="2025-11-24T00:16:47.364158633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:47.367072 containerd[1703]: time="2025-11-24T00:16:47.367034298Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929146" Nov 24 00:16:47.370110 containerd[1703]: time="2025-11-24T00:16:47.370064854Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:47.380330 containerd[1703]: time="2025-11-24T00:16:47.380286963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:47.380920 containerd[1703]: time="2025-11-24T00:16:47.380670299Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 5.326640031s" Nov 24 00:16:47.380920 containerd[1703]: time="2025-11-24T00:16:47.380702820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:16:47.381221 containerd[1703]: time="2025-11-24T00:16:47.381204182Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:16:48.079533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 24 00:16:48.082022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:48.089874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293995357.mount: Deactivated successfully. Nov 24 00:16:48.511995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:48.525169 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:16:48.559714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:16:48.612377 kubelet[2511]: E1124 00:16:48.557986 2511 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:16:48.559824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:16:48.560140 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.1M memory peak. 
Nov 24 00:16:49.614270 containerd[1703]: time="2025-11-24T00:16:49.614216495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:49.617252 containerd[1703]: time="2025-11-24T00:16:49.617217262Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 24 00:16:49.620496 containerd[1703]: time="2025-11-24T00:16:49.620454327Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:49.632745 containerd[1703]: time="2025-11-24T00:16:49.632697720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:49.633574 containerd[1703]: time="2025-11-24T00:16:49.633423258Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.252178477s" Nov 24 00:16:49.633574 containerd[1703]: time="2025-11-24T00:16:49.633453754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:16:49.634031 containerd[1703]: time="2025-11-24T00:16:49.634002314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:16:50.231700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60758272.mount: Deactivated successfully. 
Nov 24 00:16:50.250679 containerd[1703]: time="2025-11-24T00:16:50.250627835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:16:50.255537 containerd[1703]: time="2025-11-24T00:16:50.255489591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 24 00:16:50.260175 containerd[1703]: time="2025-11-24T00:16:50.260027732Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:16:50.264773 containerd[1703]: time="2025-11-24T00:16:50.264723387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:16:50.265513 containerd[1703]: time="2025-11-24T00:16:50.265226057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 631.122128ms" Nov 24 00:16:50.265513 containerd[1703]: time="2025-11-24T00:16:50.265254684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:16:50.265821 containerd[1703]: time="2025-11-24T00:16:50.265781759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:16:51.022673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161815031.mount: Deactivated successfully. 
Nov 24 00:16:52.815330 containerd[1703]: time="2025-11-24T00:16:52.815272476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:52.818306 containerd[1703]: time="2025-11-24T00:16:52.818161469Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Nov 24 00:16:52.822178 containerd[1703]: time="2025-11-24T00:16:52.822153489Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:52.826659 containerd[1703]: time="2025-11-24T00:16:52.826429725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:16:52.827171 containerd[1703]: time="2025-11-24T00:16:52.827146603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.561336916s" Nov 24 00:16:52.827214 containerd[1703]: time="2025-11-24T00:16:52.827180649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:16:56.095636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:56.096162 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.1M memory peak. Nov 24 00:16:56.100159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:56.122985 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-9.scope)... Nov 24 00:16:56.122999 systemd[1]: Reloading... Nov 24 00:16:56.202963 zram_generator::config[2701]: No configuration found. Nov 24 00:16:56.401694 systemd[1]: Reloading finished in 278 ms. Nov 24 00:16:56.462516 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:16:56.462615 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:16:56.463000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:56.464641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:16:57.194523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:16:57.206234 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:16:57.245285 kubelet[2767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:16:57.245285 kubelet[2767]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:16:57.245285 kubelet[2767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
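Between 00:16:19 and 00:16:52 containerd pulls the full control-plane image set (kube-apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd). A sketch that totals the reported sizes and pull times from an exported journal; the regex is keyed to the escaped-quote msg format visible in these entries and uses journal.txt as a placeholder path:

import re

# Matches: Pulled image \"NAME\" ... size \"BYTES\" in DURATION(ms|s)
PULL_RE = re.compile(r'Pulled image \\"([^\\]+)\\".*?size \\"(\d+)\\" in ([0-9.]+)(ms|s)')

def pull_totals(journal_path="journal.txt"):
    total_bytes, total_seconds = 0, 0.0
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    for name, size, value, unit in PULL_RE.findall(text):
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        total_bytes += int(size)
        total_seconds += seconds
        print(f"{name}: {int(size) / 1e6:.1f} MB in {seconds:.2f}s")
    print(f"total: {total_bytes / 1e6:.1f} MB, {total_seconds:.1f}s of pull time")

if __name__ == "__main__":
    pull_totals()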
Nov 24 00:16:57.245651 kubelet[2767]: I1124 00:16:57.245321 2767 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:16:57.923557 kubelet[2767]: I1124 00:16:57.922200 2767 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:16:57.923557 kubelet[2767]: I1124 00:16:57.922234 2767 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:16:57.923557 kubelet[2767]: I1124 00:16:57.922619 2767 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:16:57.950371 kubelet[2767]: I1124 00:16:57.950341 2767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:16:57.951018 kubelet[2767]: E1124 00:16:57.950990 2767 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:16:57.956343 kubelet[2767]: I1124 00:16:57.956329 2767 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:16:57.958843 kubelet[2767]: I1124 00:16:57.958822 2767 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:16:57.959080 kubelet[2767]: I1124 00:16:57.959047 2767 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:16:57.959234 kubelet[2767]: I1124 00:16:57.959078 2767 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-980c694365","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:16:57.959350 kubelet[2767]: I1124 00:16:57.959235 2767 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:16:57.959350 kubelet[2767]: I1124 
00:16:57.959245 2767 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:16:57.960118 kubelet[2767]: I1124 00:16:57.960100 2767 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:16:57.962957 kubelet[2767]: I1124 00:16:57.962549 2767 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:16:57.962957 kubelet[2767]: I1124 00:16:57.962595 2767 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:16:57.962957 kubelet[2767]: I1124 00:16:57.962623 2767 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:16:57.962957 kubelet[2767]: I1124 00:16:57.962638 2767 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:16:57.969349 kubelet[2767]: I1124 00:16:57.969331 2767 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:16:57.969991 kubelet[2767]: I1124 00:16:57.969925 2767 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:16:57.971084 kubelet[2767]: W1124 00:16:57.971074 2767 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:16:57.973694 kubelet[2767]: I1124 00:16:57.973681 2767 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:16:57.973831 kubelet[2767]: I1124 00:16:57.973824 2767 server.go:1289] "Started kubelet" Nov 24 00:16:57.974099 kubelet[2767]: E1124 00:16:57.974081 2767 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-980c694365&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:16:57.976222 kubelet[2767]: E1124 00:16:57.975567 2767 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:16:57.976222 kubelet[2767]: I1124 00:16:57.975664 2767 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:16:57.977223 kubelet[2767]: I1124 00:16:57.976582 2767 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:16:57.977833 kubelet[2767]: I1124 00:16:57.977783 2767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:16:57.978207 kubelet[2767]: I1124 00:16:57.978194 2767 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:16:57.981869 kubelet[2767]: E1124 00:16:57.980663 2767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-a-980c694365.187ac93160376e9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-a-980c694365,UID:ci-4459.2.1-a-980c694365,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-a-980c694365,},FirstTimestamp:2025-11-24 00:16:57.973796506 +0000 UTC m=+0.763720665,LastTimestamp:2025-11-24 00:16:57.973796506 +0000 UTC m=+0.763720665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-a-980c694365,}" Nov 24 00:16:57.984125 kubelet[2767]: I1124 00:16:57.984021 2767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:16:57.985371 kubelet[2767]: I1124 00:16:57.985356 2767 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:16:57.986705 kubelet[2767]: I1124 00:16:57.986689 2767 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:16:57.986954 kubelet[2767]: E1124 00:16:57.986940 2767 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-980c694365\" not found" Nov 24 00:16:57.987927 kubelet[2767]: I1124 00:16:57.987914 2767 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:16:57.987982 kubelet[2767]: I1124 00:16:57.987977 2767 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:16:57.989453 kubelet[2767]: E1124 00:16:57.989432 2767 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:16:57.989785 kubelet[2767]: E1124 00:16:57.989757 2767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-980c694365?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="200ms" Nov 24 00:16:57.990221 kubelet[2767]: E1124 00:16:57.990105 2767 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:16:57.990740 kubelet[2767]: I1124 00:16:57.990725 2767 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:16:57.990868 kubelet[2767]: I1124 00:16:57.990855 2767 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:16:57.991670 kubelet[2767]: I1124 00:16:57.991657 2767 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:16:58.013206 kubelet[2767]: I1124 00:16:58.013193 2767 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:16:58.013438 kubelet[2767]: I1124 00:16:58.013240 2767 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:16:58.013438 kubelet[2767]: I1124 00:16:58.013255 2767 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:16:58.018410 kubelet[2767]: I1124 00:16:58.018397 2767 policy_none.go:49] "None policy: Start" Nov 24 00:16:58.018476 kubelet[2767]: I1124 00:16:58.018471 2767 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:16:58.018506 kubelet[2767]: I1124 00:16:58.018502 2767 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:16:58.026772 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:16:58.037872 kubelet[2767]: I1124 00:16:58.037847 2767 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:16:58.039182 kubelet[2767]: I1124 00:16:58.039144 2767 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:16:58.039182 kubelet[2767]: I1124 00:16:58.039170 2767 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:16:58.039271 kubelet[2767]: I1124 00:16:58.039188 2767 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:16:58.039271 kubelet[2767]: I1124 00:16:58.039195 2767 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:16:58.039271 kubelet[2767]: E1124 00:16:58.039230 2767 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:16:58.043662 kubelet[2767]: E1124 00:16:58.043639 2767 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:16:58.046214 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:16:58.049412 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
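Every API call in this phase fails with connection refused against 10.200.4.36:6443 because the kube-apiserver static pod has not come up yet; the kubelet keeps retrying while it starts the control-plane containers. A trivial reachability probe for that endpoint (address taken from the log entries above; intended to run on the node itself):

import socket

API_HOST, API_PORT = "10.200.4.36", 6443   # endpoint the kubelet is retrying above

def probe(host=API_HOST, port=API_PORT, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError as exc:
        return f"unreachable ({exc})"

if __name__ == "__main__":
    print(f"{API_HOST}:{API_PORT} is {probe()}")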
Nov 24 00:16:58.065721 kubelet[2767]: E1124 00:16:58.065701 2767 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:16:58.066150 kubelet[2767]: I1124 00:16:58.066004 2767 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:16:58.066150 kubelet[2767]: I1124 00:16:58.066017 2767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:16:58.066635 kubelet[2767]: I1124 00:16:58.066430 2767 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:16:58.067505 kubelet[2767]: E1124 00:16:58.067483 2767 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:16:58.067619 kubelet[2767]: E1124 00:16:58.067611 2767 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-a-980c694365\" not found" Nov 24 00:16:58.152100 systemd[1]: Created slice kubepods-burstable-pod2fb74a2edc0afd4a3cfa699268e8e726.slice - libcontainer container kubepods-burstable-pod2fb74a2edc0afd4a3cfa699268e8e726.slice. Nov 24 00:16:58.161509 kubelet[2767]: E1124 00:16:58.161473 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.165058 systemd[1]: Created slice kubepods-burstable-pod1673f0686e5c68f5e9022793eb3b81e9.slice - libcontainer container kubepods-burstable-pod1673f0686e5c68f5e9022793eb3b81e9.slice. Nov 24 00:16:58.168175 kubelet[2767]: I1124 00:16:58.168138 2767 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.168503 kubelet[2767]: E1124 00:16:58.168480 2767 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.174059 kubelet[2767]: E1124 00:16:58.173978 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.177433 systemd[1]: Created slice kubepods-burstable-pod420786b88d0f5eff5ecd40055bf5ed10.slice - libcontainer container kubepods-burstable-pod420786b88d0f5eff5ecd40055bf5ed10.slice. 
Nov 24 00:16:58.179161 kubelet[2767]: E1124 00:16:58.179140 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189373 kubelet[2767]: I1124 00:16:58.189298 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189373 kubelet[2767]: I1124 00:16:58.189355 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189493 kubelet[2767]: I1124 00:16:58.189378 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189493 kubelet[2767]: I1124 00:16:58.189426 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189493 kubelet[2767]: I1124 00:16:58.189445 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189493 kubelet[2767]: I1124 00:16:58.189461 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189493 kubelet[2767]: I1124 00:16:58.189480 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189602 kubelet[2767]: I1124 00:16:58.189499 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.189602 kubelet[2767]: I1124 00:16:58.189517 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fb74a2edc0afd4a3cfa699268e8e726-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-980c694365\" (UID: \"2fb74a2edc0afd4a3cfa699268e8e726\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:16:58.190753 kubelet[2767]: E1124 00:16:58.190713 2767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-980c694365?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="400ms" Nov 24 00:16:58.370650 kubelet[2767]: I1124 00:16:58.370586 2767 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.376585 kubelet[2767]: E1124 00:16:58.376543 2767 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.462904 containerd[1703]: time="2025-11-24T00:16:58.462862596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-980c694365,Uid:2fb74a2edc0afd4a3cfa699268e8e726,Namespace:kube-system,Attempt:0,}" Nov 24 00:16:58.475607 containerd[1703]: time="2025-11-24T00:16:58.475363211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-980c694365,Uid:1673f0686e5c68f5e9022793eb3b81e9,Namespace:kube-system,Attempt:0,}" Nov 24 00:16:58.480424 containerd[1703]: time="2025-11-24T00:16:58.480398508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-980c694365,Uid:420786b88d0f5eff5ecd40055bf5ed10,Namespace:kube-system,Attempt:0,}" Nov 24 00:16:58.529983 containerd[1703]: time="2025-11-24T00:16:58.529939930Z" level=info msg="connecting to shim 0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f" address="unix:///run/containerd/s/a6990fd098db3c09c3db90be658097b7e17a17bd946963def573e1e54cbc5b90" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:16:58.544791 containerd[1703]: time="2025-11-24T00:16:58.544309630Z" level=info msg="connecting to shim 11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1" address="unix:///run/containerd/s/5146abe130fbdc0c46e7c04ba6bb296377b06947ab58b622000bc3c91b2b504c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:16:58.575296 containerd[1703]: time="2025-11-24T00:16:58.575251605Z" level=info msg="connecting to shim dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42" address="unix:///run/containerd/s/52f07b8aa855e01f46fa23e055ce2ff8cf7592de4a5530fe08a98c91e81a123d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:16:58.577300 systemd[1]: Started cri-containerd-0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f.scope - libcontainer container 0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f. 
Nov 24 00:16:58.591846 kubelet[2767]: E1124 00:16:58.591814 2767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-980c694365?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="800ms" Nov 24 00:16:58.593199 systemd[1]: Started cri-containerd-11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1.scope - libcontainer container 11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1. Nov 24 00:16:58.616152 systemd[1]: Started cri-containerd-dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42.scope - libcontainer container dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42. Nov 24 00:16:58.663095 containerd[1703]: time="2025-11-24T00:16:58.662981899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-980c694365,Uid:2fb74a2edc0afd4a3cfa699268e8e726,Namespace:kube-system,Attempt:0,} returns sandbox id \"0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f\"" Nov 24 00:16:58.684386 containerd[1703]: time="2025-11-24T00:16:58.683944464Z" level=info msg="CreateContainer within sandbox \"0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:16:58.700826 containerd[1703]: time="2025-11-24T00:16:58.700790572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-980c694365,Uid:420786b88d0f5eff5ecd40055bf5ed10,Namespace:kube-system,Attempt:0,} returns sandbox id \"11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1\"" Nov 24 00:16:58.704093 containerd[1703]: time="2025-11-24T00:16:58.704063933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-980c694365,Uid:1673f0686e5c68f5e9022793eb3b81e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42\"" Nov 24 00:16:58.709249 containerd[1703]: time="2025-11-24T00:16:58.709221329Z" level=info msg="CreateContainer within sandbox \"11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:16:58.712684 containerd[1703]: time="2025-11-24T00:16:58.712654763Z" level=info msg="Container 3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:16:58.728849 containerd[1703]: time="2025-11-24T00:16:58.728675949Z" level=info msg="CreateContainer within sandbox \"dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:16:58.730727 containerd[1703]: time="2025-11-24T00:16:58.730699263Z" level=info msg="Container ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:16:58.766889 containerd[1703]: time="2025-11-24T00:16:58.766857642Z" level=info msg="CreateContainer within sandbox \"0536d9ba875dd2a872d094a77d622d7c94dae5f5706e998570c85971d186484f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20\"" Nov 24 00:16:58.767550 containerd[1703]: time="2025-11-24T00:16:58.767507644Z" level=info msg="StartContainer for \"3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20\"" Nov 
24 00:16:58.768649 containerd[1703]: time="2025-11-24T00:16:58.768622176Z" level=info msg="connecting to shim 3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20" address="unix:///run/containerd/s/a6990fd098db3c09c3db90be658097b7e17a17bd946963def573e1e54cbc5b90" protocol=ttrpc version=3 Nov 24 00:16:58.770138 containerd[1703]: time="2025-11-24T00:16:58.770057563Z" level=info msg="Container 3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:16:58.778426 kubelet[2767]: I1124 00:16:58.778385 2767 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.778970 kubelet[2767]: E1124 00:16:58.778700 2767 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:58.783272 containerd[1703]: time="2025-11-24T00:16:58.783232309Z" level=info msg="CreateContainer within sandbox \"11295a5791fb6700b0c02c61341b91df4a5903b60d03da182e773ff9a6aba1d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a\"" Nov 24 00:16:58.784109 containerd[1703]: time="2025-11-24T00:16:58.783749910Z" level=info msg="StartContainer for \"ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a\"" Nov 24 00:16:58.787117 containerd[1703]: time="2025-11-24T00:16:58.786969901Z" level=info msg="connecting to shim ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a" address="unix:///run/containerd/s/5146abe130fbdc0c46e7c04ba6bb296377b06947ab58b622000bc3c91b2b504c" protocol=ttrpc version=3 Nov 24 00:16:58.792559 containerd[1703]: time="2025-11-24T00:16:58.792530296Z" level=info msg="CreateContainer within sandbox \"dcecb809746b14ad02c31d9d28d0cea871d991b939288531ba95cc9b5ef97c42\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e\"" Nov 24 00:16:58.793247 systemd[1]: Started cri-containerd-3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20.scope - libcontainer container 3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20. Nov 24 00:16:58.793768 containerd[1703]: time="2025-11-24T00:16:58.793741785Z" level=info msg="StartContainer for \"3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e\"" Nov 24 00:16:58.796082 containerd[1703]: time="2025-11-24T00:16:58.794884492Z" level=info msg="connecting to shim 3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e" address="unix:///run/containerd/s/52f07b8aa855e01f46fa23e055ce2ff8cf7592de4a5530fe08a98c91e81a123d" protocol=ttrpc version=3 Nov 24 00:16:58.819073 systemd[1]: Started cri-containerd-ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a.scope - libcontainer container ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a. Nov 24 00:16:58.827214 systemd[1]: Started cri-containerd-3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e.scope - libcontainer container 3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e. 
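The three sandboxes being started here are the control-plane static pods, which the kubelet picked up from the static pod path it logged earlier (/etc/kubernetes/manifests). A quick, sketch-only look at what that directory holds on the node:

import os

MANIFEST_DIR = "/etc/kubernetes/manifests"   # static pod path reported by the kubelet above

def list_static_pod_manifests(path=MANIFEST_DIR):
    if not os.path.isdir(path):
        print(f"{path} does not exist on this host")
        return
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        print(f"{name}: {os.path.getsize(full)} bytes")

if __name__ == "__main__":
    list_static_pod_manifests()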
Nov 24 00:16:58.903788 containerd[1703]: time="2025-11-24T00:16:58.903749187Z" level=info msg="StartContainer for \"3e41f34b31952150f71265998d9d3b04afa70699f2c9f54985d7096e90f6cd20\" returns successfully" Nov 24 00:16:58.917587 containerd[1703]: time="2025-11-24T00:16:58.917544683Z" level=info msg="StartContainer for \"ab24b53fb7de1e4a5250ce1717764e13750e973a40f916d820bd3d3ed25d238a\" returns successfully" Nov 24 00:16:58.919097 containerd[1703]: time="2025-11-24T00:16:58.919064095Z" level=info msg="StartContainer for \"3cf65ed2d90c908cc67f4465ad5f6ade84b90ee3408970c0c156d0329427cd0e\" returns successfully" Nov 24 00:16:59.053736 kubelet[2767]: E1124 00:16:59.053637 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:59.055986 kubelet[2767]: E1124 00:16:59.055760 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:59.059457 kubelet[2767]: E1124 00:16:59.059435 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:16:59.581301 kubelet[2767]: I1124 00:16:59.581271 2767 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:00.062749 kubelet[2767]: E1124 00:17:00.062717 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:00.063053 kubelet[2767]: E1124 00:17:00.063037 2767 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:00.806675 kubelet[2767]: E1124 00:17:00.806636 2767 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-a-980c694365\" not found" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:00.880134 kubelet[2767]: I1124 00:17:00.880097 2767 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:00.887942 kubelet[2767]: I1124 00:17:00.887915 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.898110 kubelet[2767]: E1124 00:17:00.898080 2767 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-980c694365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.898110 kubelet[2767]: I1124 00:17:00.898106 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.899926 kubelet[2767]: E1124 00:17:00.899772 2767 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-980c694365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.899926 kubelet[2767]: I1124 00:17:00.899793 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.901849 
kubelet[2767]: E1124 00:17:00.901741 2767 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-a-980c694365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:00.977596 kubelet[2767]: I1124 00:17:00.977357 2767 apiserver.go:52] "Watching apiserver" Nov 24 00:17:00.989983 kubelet[2767]: I1124 00:17:00.989957 2767 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:17:02.641943 kubelet[2767]: I1124 00:17:02.641888 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:02.679913 kubelet[2767]: I1124 00:17:02.679857 2767 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:04.158524 systemd[1]: Reload requested from client PID 3050 ('systemctl') (unit session-9.scope)... Nov 24 00:17:04.158538 systemd[1]: Reloading... Nov 24 00:17:04.250939 zram_generator::config[3097]: No configuration found. Nov 24 00:17:04.466860 systemd[1]: Reloading finished in 308 ms. Nov 24 00:17:04.509535 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:04.532367 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:17:04.532839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:17:04.533016 systemd[1]: kubelet.service: Consumed 1.146s CPU time, 131.7M memory peak. Nov 24 00:17:04.536195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:17:05.011096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:17:05.022178 (kubelet)[3164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:17:05.062343 kubelet[3164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:17:05.062343 kubelet[3164]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:17:05.062343 kubelet[3164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
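The mirror-pod failures above ("no PriorityClass with name system-node-critical was found") are transient: the built-in PriorityClasses only become visible once the API server has finished bootstrapping its defaults. A hedged check, assuming kubectl and a working kubeconfig are available on the host:

import subprocess

def has_priority_class(name="system-node-critical"):
    # Returns True once the built-in PriorityClass is visible through the API server.
    result = subprocess.run(
        ["kubectl", "get", "priorityclass", name, "-o", "name"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    print("system-node-critical present:", has_priority_class())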
Nov 24 00:17:05.062733 kubelet[3164]: I1124 00:17:05.062451 3164 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:17:05.068594 kubelet[3164]: I1124 00:17:05.068568 3164 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:17:05.068594 kubelet[3164]: I1124 00:17:05.068588 3164 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:17:05.068829 kubelet[3164]: I1124 00:17:05.068795 3164 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:17:05.069961 kubelet[3164]: I1124 00:17:05.069754 3164 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:17:05.073582 kubelet[3164]: I1124 00:17:05.072970 3164 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:17:05.076619 kubelet[3164]: I1124 00:17:05.076599 3164 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:17:05.079976 kubelet[3164]: I1124 00:17:05.079612 3164 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:17:05.079976 kubelet[3164]: I1124 00:17:05.079794 3164 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:17:05.080127 kubelet[3164]: I1124 00:17:05.079814 3164 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-980c694365","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:17:05.080221 kubelet[3164]: I1124 00:17:05.080141 3164 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:17:05.080221 kubelet[3164]: I1124 00:17:05.080152 3164 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:17:05.080221 kubelet[3164]: I1124 00:17:05.080198 3164 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:17:05.080358 kubelet[3164]: 
I1124 00:17:05.080348 3164 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:17:05.080388 kubelet[3164]: I1124 00:17:05.080362 3164 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:17:05.080388 kubelet[3164]: I1124 00:17:05.080386 3164 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:17:05.080432 kubelet[3164]: I1124 00:17:05.080400 3164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:17:05.083835 kubelet[3164]: I1124 00:17:05.083737 3164 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:17:05.085207 kubelet[3164]: I1124 00:17:05.084568 3164 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:17:05.091070 kubelet[3164]: I1124 00:17:05.091053 3164 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:17:05.091148 kubelet[3164]: I1124 00:17:05.091104 3164 server.go:1289] "Started kubelet" Nov 24 00:17:05.093034 kubelet[3164]: I1124 00:17:05.092954 3164 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:17:05.093786 kubelet[3164]: I1124 00:17:05.093771 3164 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:17:05.094295 kubelet[3164]: I1124 00:17:05.094281 3164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:17:05.097887 kubelet[3164]: I1124 00:17:05.097741 3164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:17:05.097982 kubelet[3164]: I1124 00:17:05.097967 3164 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:17:05.106072 kubelet[3164]: I1124 00:17:05.104699 3164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:17:05.106072 kubelet[3164]: I1124 00:17:05.105272 3164 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:17:05.107922 kubelet[3164]: I1124 00:17:05.106981 3164 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:17:05.107922 kubelet[3164]: I1124 00:17:05.107080 3164 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:17:05.108927 kubelet[3164]: I1124 00:17:05.108694 3164 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:17:05.110030 kubelet[3164]: I1124 00:17:05.109716 3164 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:17:05.110030 kubelet[3164]: I1124 00:17:05.109737 3164 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:17:05.110030 kubelet[3164]: I1124 00:17:05.109761 3164 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
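The container-manager config dumped above lists the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, nodefs/imagefs inodesFree < 5%). A minimal sketch, not kubelet code, of how such a threshold is compared against node capacity; the capacity and free-space figures below are invented purely for illustration:

    # Hard-eviction thresholds copied from the HardEvictionThresholds block logged above.
    # Each threshold is either an absolute quantity (bytes) or a fraction of capacity.
    thresholds = {
        "memory.available":   {"quantity": 100 * 1024**2},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "imagefs.available":  {"percentage": 0.15},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    def breached(signal: str, available: float, capacity: float) -> bool:
        t = thresholds[signal]
        limit = t["quantity"] if "quantity" in t else t["percentage"] * capacity
        return available < limit

    # Invented example: a 100 GiB root filesystem with 8 GiB free crosses the
    # nodefs.available < 10% threshold and would make the node eviction-eligible.
    print(breached("nodefs.available", 8 * 1024**3, 100 * 1024**3))  # True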
Nov 24 00:17:05.110030 kubelet[3164]: I1124 00:17:05.109769 3164 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:17:05.110030 kubelet[3164]: E1124 00:17:05.109803 3164 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:17:05.113392 kubelet[3164]: I1124 00:17:05.113354 3164 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:17:05.113590 kubelet[3164]: I1124 00:17:05.113575 3164 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:17:05.117014 kubelet[3164]: I1124 00:17:05.116999 3164 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:17:05.118566 kubelet[3164]: E1124 00:17:05.118545 3164 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:17:05.163534 kubelet[3164]: I1124 00:17:05.163512 3164 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:17:05.163534 kubelet[3164]: I1124 00:17:05.163528 3164 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163546 3164 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163667 3164 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163674 3164 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163691 3164 policy_none.go:49] "None policy: Start" Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163700 3164 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:17:05.163750 kubelet[3164]: I1124 00:17:05.163709 3164 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:17:05.163923 kubelet[3164]: I1124 00:17:05.163796 3164 state_mem.go:75] "Updated machine memory state" Nov 24 00:17:05.166874 kubelet[3164]: E1124 00:17:05.166852 3164 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:17:05.167338 kubelet[3164]: I1124 00:17:05.167014 3164 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:17:05.167338 kubelet[3164]: I1124 00:17:05.167025 3164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:17:05.167338 kubelet[3164]: I1124 00:17:05.167178 3164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:17:05.169760 kubelet[3164]: E1124 00:17:05.169236 3164 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:17:05.210782 kubelet[3164]: I1124 00:17:05.210757 3164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.211941 kubelet[3164]: I1124 00:17:05.210757 3164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.212041 kubelet[3164]: I1124 00:17:05.212018 3164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.231735 kubelet[3164]: I1124 00:17:05.231713 3164 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:05.232101 kubelet[3164]: I1124 00:17:05.232011 3164 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:05.275848 kubelet[3164]: I1124 00:17:05.275764 3164 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307449 kubelet[3164]: I1124 00:17:05.307413 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307449 kubelet[3164]: I1124 00:17:05.307450 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307449 kubelet[3164]: I1124 00:17:05.307478 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307449 kubelet[3164]: I1124 00:17:05.307497 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307744 kubelet[3164]: I1124 00:17:05.307516 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307744 kubelet[3164]: I1124 00:17:05.307532 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1673f0686e5c68f5e9022793eb3b81e9-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-980c694365\" (UID: \"1673f0686e5c68f5e9022793eb3b81e9\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307744 kubelet[3164]: I1124 00:17:05.307549 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307744 kubelet[3164]: I1124 00:17:05.307568 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/420786b88d0f5eff5ecd40055bf5ed10-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-980c694365\" (UID: \"420786b88d0f5eff5ecd40055bf5ed10\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.307744 kubelet[3164]: I1124 00:17:05.307586 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fb74a2edc0afd4a3cfa699268e8e726-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-980c694365\" (UID: \"2fb74a2edc0afd4a3cfa699268e8e726\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.320805 kubelet[3164]: I1124 00:17:05.319641 3164 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:05.320805 kubelet[3164]: E1124 00:17:05.319864 3164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-980c694365\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:05.368043 kubelet[3164]: I1124 00:17:05.368008 3164 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:05.368193 kubelet[3164]: I1124 00:17:05.368111 3164 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-980c694365" Nov 24 00:17:06.082758 kubelet[3164]: I1124 00:17:06.082714 3164 apiserver.go:52] "Watching apiserver" Nov 24 00:17:06.107140 kubelet[3164]: I1124 00:17:06.107115 3164 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:17:06.150199 kubelet[3164]: I1124 00:17:06.150172 3164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:06.150621 kubelet[3164]: I1124 00:17:06.150603 3164 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:06.166703 kubelet[3164]: I1124 00:17:06.166652 3164 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:06.166862 kubelet[3164]: E1124 00:17:06.166772 3164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-980c694365\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" Nov 24 00:17:06.174810 kubelet[3164]: I1124 00:17:06.174753 3164 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 00:17:06.175446 kubelet[3164]: E1124 00:17:06.175416 3164 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-980c694365\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" Nov 24 00:17:06.177944 kubelet[3164]: I1124 00:17:06.176147 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-980c694365" podStartSLOduration=1.176134099 podStartE2EDuration="1.176134099s" podCreationTimestamp="2025-11-24 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:06.176014599 +0000 UTC m=+1.149036010" watchObservedRunningTime="2025-11-24 00:17:06.176134099 +0000 UTC m=+1.149155512" Nov 24 00:17:06.268702 kubelet[3164]: I1124 00:17:06.268627 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-a-980c694365" podStartSLOduration=1.2686084 podStartE2EDuration="1.2686084s" podCreationTimestamp="2025-11-24 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:06.190164093 +0000 UTC m=+1.163185507" watchObservedRunningTime="2025-11-24 00:17:06.2686084 +0000 UTC m=+1.241629811" Nov 24 00:17:06.268702 kubelet[3164]: I1124 00:17:06.268705 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-a-980c694365" podStartSLOduration=4.268701701 podStartE2EDuration="4.268701701s" podCreationTimestamp="2025-11-24 00:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:06.26855685 +0000 UTC m=+1.241578263" watchObservedRunningTime="2025-11-24 00:17:06.268701701 +0000 UTC m=+1.241723108" Nov 24 00:17:09.088514 kubelet[3164]: I1124 00:17:09.088423 3164 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:17:09.088988 containerd[1703]: time="2025-11-24T00:17:09.088734963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:17:09.089182 kubelet[3164]: I1124 00:17:09.089090 3164 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:17:10.045117 systemd[1]: Created slice kubepods-besteffort-podd396f1ac_b055_45ca_85ed_376233074470.slice - libcontainer container kubepods-besteffort-podd396f1ac_b055_45ca_85ed_376233074470.slice. 
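Just above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime and containerd notes it is still waiting for a CNI config to be dropped in. As a quick sanity check with the standard library, a /24 per node bounds the pod address pool at 256 addresses, a few of which the CNI plugin typically reserves for itself:

    import ipaddress

    # Pod CIDR assigned to this node, as logged by kubelet_network above.
    pod_cidr = ipaddress.ip_network("192.168.0.0/24")
    print(pod_cidr.num_addresses)           # 256 addresses in the /24
    print(pod_cidr[1], "-", pod_cidr[-2])   # 192.168.0.1 - 192.168.0.254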
Nov 24 00:17:10.138291 kubelet[3164]: I1124 00:17:10.138165 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d396f1ac-b055-45ca-85ed-376233074470-kube-proxy\") pod \"kube-proxy-hq6zb\" (UID: \"d396f1ac-b055-45ca-85ed-376233074470\") " pod="kube-system/kube-proxy-hq6zb" Nov 24 00:17:10.138291 kubelet[3164]: I1124 00:17:10.138259 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ctg\" (UniqueName: \"kubernetes.io/projected/d396f1ac-b055-45ca-85ed-376233074470-kube-api-access-k8ctg\") pod \"kube-proxy-hq6zb\" (UID: \"d396f1ac-b055-45ca-85ed-376233074470\") " pod="kube-system/kube-proxy-hq6zb" Nov 24 00:17:10.138291 kubelet[3164]: I1124 00:17:10.138302 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d396f1ac-b055-45ca-85ed-376233074470-xtables-lock\") pod \"kube-proxy-hq6zb\" (UID: \"d396f1ac-b055-45ca-85ed-376233074470\") " pod="kube-system/kube-proxy-hq6zb" Nov 24 00:17:10.138801 kubelet[3164]: I1124 00:17:10.138320 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d396f1ac-b055-45ca-85ed-376233074470-lib-modules\") pod \"kube-proxy-hq6zb\" (UID: \"d396f1ac-b055-45ca-85ed-376233074470\") " pod="kube-system/kube-proxy-hq6zb" Nov 24 00:17:10.285914 kubelet[3164]: E1124 00:17:10.285717 3164 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:17:10.285914 kubelet[3164]: E1124 00:17:10.285749 3164 projected.go:194] Error preparing data for projected volume kube-api-access-k8ctg for pod kube-system/kube-proxy-hq6zb: configmap "kube-root-ca.crt" not found Nov 24 00:17:10.285914 kubelet[3164]: E1124 00:17:10.285821 3164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d396f1ac-b055-45ca-85ed-376233074470-kube-api-access-k8ctg podName:d396f1ac-b055-45ca-85ed-376233074470 nodeName:}" failed. No retries permitted until 2025-11-24 00:17:10.785795054 +0000 UTC m=+5.758816457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k8ctg" (UniqueName: "kubernetes.io/projected/d396f1ac-b055-45ca-85ed-376233074470-kube-api-access-k8ctg") pod "kube-proxy-hq6zb" (UID: "d396f1ac-b055-45ca-85ed-376233074470") : configmap "kube-root-ca.crt" not found Nov 24 00:17:10.595797 systemd[1]: Created slice kubepods-besteffort-pod4b5b9162_90c1_4824_897c_1a3f4e39f5b3.slice - libcontainer container kubepods-besteffort-pod4b5b9162_90c1_4824_897c_1a3f4e39f5b3.slice. 
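The projected-volume failure above is transient: the kube-root-ca.crt ConfigMap has not been published to the namespace yet, so the mount is retried rather than treated as fatal. The operation records a durationBeforeRetry of 500ms together with a wall-clock time before which no retry is permitted; a small sketch recovering when the failed attempt happened from those two logged values (timestamps truncated to microseconds, since datetime does not carry nanoseconds):

    from datetime import datetime, timedelta

    # Values copied from the nestedpendingoperations error above.
    no_retry_until = datetime.fromisoformat("2025-11-24 00:17:10.785795")
    duration_before_retry = timedelta(milliseconds=500)

    # The attempt that failed happened one backoff interval earlier.
    failed_at = no_retry_until - duration_before_retry
    print(failed_at.time())  # 00:17:10.285795, just before the E1124 ...285821 error line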
Nov 24 00:17:10.642320 kubelet[3164]: I1124 00:17:10.642284 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b5b9162-90c1-4824-897c-1a3f4e39f5b3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fth5b\" (UID: \"4b5b9162-90c1-4824-897c-1a3f4e39f5b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-fth5b" Nov 24 00:17:10.642467 kubelet[3164]: I1124 00:17:10.642361 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjh4v\" (UniqueName: \"kubernetes.io/projected/4b5b9162-90c1-4824-897c-1a3f4e39f5b3-kube-api-access-vjh4v\") pod \"tigera-operator-7dcd859c48-fth5b\" (UID: \"4b5b9162-90c1-4824-897c-1a3f4e39f5b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-fth5b" Nov 24 00:17:10.902989 containerd[1703]: time="2025-11-24T00:17:10.901081145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fth5b,Uid:4b5b9162-90c1-4824-897c-1a3f4e39f5b3,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:17:10.941434 containerd[1703]: time="2025-11-24T00:17:10.941393522Z" level=info msg="connecting to shim a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75" address="unix:///run/containerd/s/bff086fbe5a3f9db8df9bf2990660ba67ee95d076fbdb593d11bf6fdfed64190" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:10.954077 containerd[1703]: time="2025-11-24T00:17:10.954038294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq6zb,Uid:d396f1ac-b055-45ca-85ed-376233074470,Namespace:kube-system,Attempt:0,}" Nov 24 00:17:10.966058 systemd[1]: Started cri-containerd-a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75.scope - libcontainer container a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75. Nov 24 00:17:11.000610 containerd[1703]: time="2025-11-24T00:17:11.000566891Z" level=info msg="connecting to shim 4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e" address="unix:///run/containerd/s/8f7ac3fa6b76af68c17e05c75fa3a810362e2e11f8042297eb1c28e2c7e85ace" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:11.026194 containerd[1703]: time="2025-11-24T00:17:11.026134290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fth5b,Uid:4b5b9162-90c1-4824-897c-1a3f4e39f5b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75\"" Nov 24 00:17:11.029100 containerd[1703]: time="2025-11-24T00:17:11.029038396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:17:11.030046 systemd[1]: Started cri-containerd-4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e.scope - libcontainer container 4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e. 
Nov 24 00:17:11.054500 containerd[1703]: time="2025-11-24T00:17:11.054470781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq6zb,Uid:d396f1ac-b055-45ca-85ed-376233074470,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e\"" Nov 24 00:17:11.067163 containerd[1703]: time="2025-11-24T00:17:11.067132791Z" level=info msg="CreateContainer within sandbox \"4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:17:11.089530 containerd[1703]: time="2025-11-24T00:17:11.089483217Z" level=info msg="Container 49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:11.106932 containerd[1703]: time="2025-11-24T00:17:11.106879074Z" level=info msg="CreateContainer within sandbox \"4f0ec9f739c8a23317c5666e4ab0b28e6cecb874cf90605c55a7c1a381ee236e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a\"" Nov 24 00:17:11.107399 containerd[1703]: time="2025-11-24T00:17:11.107378351Z" level=info msg="StartContainer for \"49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a\"" Nov 24 00:17:11.109339 containerd[1703]: time="2025-11-24T00:17:11.109306746Z" level=info msg="connecting to shim 49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a" address="unix:///run/containerd/s/8f7ac3fa6b76af68c17e05c75fa3a810362e2e11f8042297eb1c28e2c7e85ace" protocol=ttrpc version=3 Nov 24 00:17:11.127062 systemd[1]: Started cri-containerd-49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a.scope - libcontainer container 49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a. Nov 24 00:17:11.196355 containerd[1703]: time="2025-11-24T00:17:11.196250522Z" level=info msg="StartContainer for \"49bb5c4618e7d3325070b75774aeadac694bebd67ed048fbd7fbde61d8bdc93a\" returns successfully" Nov 24 00:17:12.542781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233938213.mount: Deactivated successfully. 
Nov 24 00:17:13.277123 containerd[1703]: time="2025-11-24T00:17:13.277077788Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:13.280427 containerd[1703]: time="2025-11-24T00:17:13.280285450Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:17:13.283441 containerd[1703]: time="2025-11-24T00:17:13.283400779Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:13.288523 containerd[1703]: time="2025-11-24T00:17:13.288471990Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:13.289119 containerd[1703]: time="2025-11-24T00:17:13.288964458Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.259898168s" Nov 24 00:17:13.289119 containerd[1703]: time="2025-11-24T00:17:13.288996480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:17:13.296721 containerd[1703]: time="2025-11-24T00:17:13.296681343Z" level=info msg="CreateContainer within sandbox \"a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:17:13.331837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627133721.mount: Deactivated successfully. Nov 24 00:17:13.333985 containerd[1703]: time="2025-11-24T00:17:13.332077837Z" level=info msg="Container e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:13.351325 containerd[1703]: time="2025-11-24T00:17:13.351290251Z" level=info msg="CreateContainer within sandbox \"a64a0a0ef1888b19517b79238ae96c4a428382b9fbe1f52b6d76a97e5ddebf75\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee\"" Nov 24 00:17:13.352938 containerd[1703]: time="2025-11-24T00:17:13.351932168Z" level=info msg="StartContainer for \"e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee\"" Nov 24 00:17:13.353083 containerd[1703]: time="2025-11-24T00:17:13.353035080Z" level=info msg="connecting to shim e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee" address="unix:///run/containerd/s/bff086fbe5a3f9db8df9bf2990660ba67ee95d076fbdb593d11bf6fdfed64190" protocol=ttrpc version=3 Nov 24 00:17:13.374068 systemd[1]: Started cri-containerd-e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee.scope - libcontainer container e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee. 
Nov 24 00:17:13.405546 containerd[1703]: time="2025-11-24T00:17:13.405493917Z" level=info msg="StartContainer for \"e88f7225cf70469cad60f6c924a2628c44dd0dd9c5d201b11799daec2b0c37ee\" returns successfully" Nov 24 00:17:14.180318 kubelet[3164]: I1124 00:17:14.180245 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hq6zb" podStartSLOduration=5.180223984 podStartE2EDuration="5.180223984s" podCreationTimestamp="2025-11-24 00:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:12.178872314 +0000 UTC m=+7.151893728" watchObservedRunningTime="2025-11-24 00:17:14.180223984 +0000 UTC m=+9.153245395" Nov 24 00:17:15.670072 kubelet[3164]: I1124 00:17:15.669952 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fth5b" podStartSLOduration=3.407478317 podStartE2EDuration="5.669931929s" podCreationTimestamp="2025-11-24 00:17:10 +0000 UTC" firstStartedPulling="2025-11-24 00:17:11.027375733 +0000 UTC m=+6.000397137" lastFinishedPulling="2025-11-24 00:17:13.289829335 +0000 UTC m=+8.262850749" observedRunningTime="2025-11-24 00:17:14.18090989 +0000 UTC m=+9.153931297" watchObservedRunningTime="2025-11-24 00:17:15.669931929 +0000 UTC m=+10.642953345" Nov 24 00:17:20.796322 sudo[2130]: pam_unix(sudo:session): session closed for user root Nov 24 00:17:20.901384 sshd[2129]: Connection closed by 10.200.16.10 port 38460 Nov 24 00:17:20.902048 sshd-session[2126]: pam_unix(sshd:session): session closed for user core Nov 24 00:17:20.905566 systemd[1]: sshd@6-10.200.4.36:22-10.200.16.10:38460.service: Deactivated successfully. Nov 24 00:17:20.910428 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:17:20.910785 systemd[1]: session-9.scope: Consumed 4.646s CPU time, 226.3M memory peak. Nov 24 00:17:20.913459 systemd-logind[1682]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:17:20.916573 systemd-logind[1682]: Removed session 9. Nov 24 00:17:27.694445 systemd[1]: Created slice kubepods-besteffort-pod54c179db_c35e_4b3e_b7a2_80b238a36593.slice - libcontainer container kubepods-besteffort-pod54c179db_c35e_4b3e_b7a2_80b238a36593.slice. 
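The tigera-operator entry above reports podStartE2EDuration="5.669931929s" but a smaller podStartSLOduration=3.407478317. The gap is the image-pull window (firstStartedPulling to lastFinishedPulling); a short sketch reproducing the arithmetic from the logged timestamps, on the assumption that the tracker derives the SLO figure by excluding pull time:

    # Timestamps from the pod_startup_latency_tracker entry above,
    # written as seconds past 00:17:00 UTC to keep the arithmetic readable.
    created       = 10.0           # podCreationTimestamp  00:17:10
    running       = 15.669931929   # observedRunningTime   00:17:15.669931929
    pull_started  = 11.027375733   # firstStartedPulling   00:17:11.027375733
    pull_finished = 13.289829335   # lastFinishedPulling   00:17:13.289829335

    e2e = running - created                     # 5.669931929s -> podStartE2EDuration
    slo = e2e - (pull_finished - pull_started)  # ~3.4074783s  -> podStartSLOduration
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")

For the kube-proxy entry just above, nothing was pulled (both pull timestamps are the zero time), which is why its SLO and E2E durations are identical.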
Nov 24 00:17:27.752123 kubelet[3164]: I1124 00:17:27.751987 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6sp\" (UniqueName: \"kubernetes.io/projected/54c179db-c35e-4b3e-b7a2-80b238a36593-kube-api-access-pr6sp\") pod \"calico-typha-74b749cc75-xf56k\" (UID: \"54c179db-c35e-4b3e-b7a2-80b238a36593\") " pod="calico-system/calico-typha-74b749cc75-xf56k" Nov 24 00:17:27.752123 kubelet[3164]: I1124 00:17:27.752034 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54c179db-c35e-4b3e-b7a2-80b238a36593-typha-certs\") pod \"calico-typha-74b749cc75-xf56k\" (UID: \"54c179db-c35e-4b3e-b7a2-80b238a36593\") " pod="calico-system/calico-typha-74b749cc75-xf56k" Nov 24 00:17:27.752123 kubelet[3164]: I1124 00:17:27.752053 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c179db-c35e-4b3e-b7a2-80b238a36593-tigera-ca-bundle\") pod \"calico-typha-74b749cc75-xf56k\" (UID: \"54c179db-c35e-4b3e-b7a2-80b238a36593\") " pod="calico-system/calico-typha-74b749cc75-xf56k" Nov 24 00:17:27.999045 containerd[1703]: time="2025-11-24T00:17:27.998999306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b749cc75-xf56k,Uid:54c179db-c35e-4b3e-b7a2-80b238a36593,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:28.049926 systemd[1]: Created slice kubepods-besteffort-pod36120cc3_2058_4179_ab49_1500cc42b0af.slice - libcontainer container kubepods-besteffort-pod36120cc3_2058_4179_ab49_1500cc42b0af.slice. Nov 24 00:17:28.054161 kubelet[3164]: I1124 00:17:28.054131 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36120cc3-2058-4179-ab49-1500cc42b0af-node-certs\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054572 kubelet[3164]: I1124 00:17:28.054310 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-var-lib-calico\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054572 kubelet[3164]: I1124 00:17:28.054341 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-var-run-calico\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054572 kubelet[3164]: I1124 00:17:28.054370 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-cni-bin-dir\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054572 kubelet[3164]: I1124 00:17:28.054396 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-cni-net-dir\") pod \"calico-node-qsgps\" (UID: 
\"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054572 kubelet[3164]: I1124 00:17:28.054422 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbn6\" (UniqueName: \"kubernetes.io/projected/36120cc3-2058-4179-ab49-1500cc42b0af-kube-api-access-sdbn6\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054747 kubelet[3164]: I1124 00:17:28.054446 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-policysync\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054747 kubelet[3164]: I1124 00:17:28.054469 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-cni-log-dir\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054747 kubelet[3164]: I1124 00:17:28.054500 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-lib-modules\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054747 kubelet[3164]: I1124 00:17:28.054519 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-xtables-lock\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.054747 kubelet[3164]: I1124 00:17:28.054545 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36120cc3-2058-4179-ab49-1500cc42b0af-flexvol-driver-host\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.055348 kubelet[3164]: I1124 00:17:28.055201 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36120cc3-2058-4179-ab49-1500cc42b0af-tigera-ca-bundle\") pod \"calico-node-qsgps\" (UID: \"36120cc3-2058-4179-ab49-1500cc42b0af\") " pod="calico-system/calico-node-qsgps" Nov 24 00:17:28.055583 containerd[1703]: time="2025-11-24T00:17:28.055550963Z" level=info msg="connecting to shim 5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b" address="unix:///run/containerd/s/17de2a41bd927a7357aa1f707bb3309d4f79ba6a05db4803154bcf4f43c05caa" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:28.079170 systemd[1]: Started cri-containerd-5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b.scope - libcontainer container 5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b. 
Nov 24 00:17:28.125601 containerd[1703]: time="2025-11-24T00:17:28.125423145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b749cc75-xf56k,Uid:54c179db-c35e-4b3e-b7a2-80b238a36593,Namespace:calico-system,Attempt:0,} returns sandbox id \"5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b\"" Nov 24 00:17:28.127097 containerd[1703]: time="2025-11-24T00:17:28.127058659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:17:28.156708 kubelet[3164]: E1124 00:17:28.156679 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.156708 kubelet[3164]: W1124 00:17:28.156702 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.157027 kubelet[3164]: E1124 00:17:28.156724 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.157201 kubelet[3164]: E1124 00:17:28.157109 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.157201 kubelet[3164]: W1124 00:17:28.157122 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.157201 kubelet[3164]: E1124 00:17:28.157137 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.157359 kubelet[3164]: E1124 00:17:28.157353 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.157402 kubelet[3164]: W1124 00:17:28.157395 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.157446 kubelet[3164]: E1124 00:17:28.157438 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.157665 kubelet[3164]: E1124 00:17:28.157570 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.157665 kubelet[3164]: W1124 00:17:28.157576 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.157665 kubelet[3164]: E1124 00:17:28.157584 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.157788 kubelet[3164]: E1124 00:17:28.157781 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.157825 kubelet[3164]: W1124 00:17:28.157820 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.157868 kubelet[3164]: E1124 00:17:28.157860 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.158062 kubelet[3164]: E1124 00:17:28.158056 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.158109 kubelet[3164]: W1124 00:17:28.158104 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.158152 kubelet[3164]: E1124 00:17:28.158144 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.158349 kubelet[3164]: E1124 00:17:28.158319 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.158349 kubelet[3164]: W1124 00:17:28.158326 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.158349 kubelet[3164]: E1124 00:17:28.158336 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.158591 kubelet[3164]: E1124 00:17:28.158569 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.158591 kubelet[3164]: W1124 00:17:28.158576 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.158591 kubelet[3164]: E1124 00:17:28.158583 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.158833 kubelet[3164]: E1124 00:17:28.158811 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.158833 kubelet[3164]: W1124 00:17:28.158818 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.158833 kubelet[3164]: E1124 00:17:28.158825 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.160273 kubelet[3164]: E1124 00:17:28.160193 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.160273 kubelet[3164]: W1124 00:17:28.160213 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.160273 kubelet[3164]: E1124 00:17:28.160229 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.160615 kubelet[3164]: E1124 00:17:28.160584 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.160615 kubelet[3164]: W1124 00:17:28.160593 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.160615 kubelet[3164]: E1124 00:17:28.160603 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.160939 kubelet[3164]: E1124 00:17:28.160844 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.160939 kubelet[3164]: W1124 00:17:28.160851 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.160939 kubelet[3164]: E1124 00:17:28.160859 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.161240 kubelet[3164]: E1124 00:17:28.161210 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.161240 kubelet[3164]: W1124 00:17:28.161220 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.161240 kubelet[3164]: E1124 00:17:28.161229 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.161497 kubelet[3164]: E1124 00:17:28.161472 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.161497 kubelet[3164]: W1124 00:17:28.161479 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.161497 kubelet[3164]: E1124 00:17:28.161489 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.161730 kubelet[3164]: E1124 00:17:28.161707 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.161730 kubelet[3164]: W1124 00:17:28.161714 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.161730 kubelet[3164]: E1124 00:17:28.161722 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.161971 kubelet[3164]: E1124 00:17:28.161947 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.161971 kubelet[3164]: W1124 00:17:28.161955 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.161971 kubelet[3164]: E1124 00:17:28.161962 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.162186 kubelet[3164]: E1124 00:17:28.162161 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.162186 kubelet[3164]: W1124 00:17:28.162168 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.162186 kubelet[3164]: E1124 00:17:28.162177 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.162417 kubelet[3164]: E1124 00:17:28.162410 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.162465 kubelet[3164]: W1124 00:17:28.162449 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.162465 kubelet[3164]: E1124 00:17:28.162457 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.162651 kubelet[3164]: E1124 00:17:28.162628 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.162651 kubelet[3164]: W1124 00:17:28.162636 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.162651 kubelet[3164]: E1124 00:17:28.162643 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.162863 kubelet[3164]: E1124 00:17:28.162841 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.162863 kubelet[3164]: W1124 00:17:28.162848 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.162863 kubelet[3164]: E1124 00:17:28.162855 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.163109 kubelet[3164]: E1124 00:17:28.163086 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.163109 kubelet[3164]: W1124 00:17:28.163094 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.163109 kubelet[3164]: E1124 00:17:28.163101 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.163303 kubelet[3164]: E1124 00:17:28.163282 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.163303 kubelet[3164]: W1124 00:17:28.163288 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.163303 kubelet[3164]: E1124 00:17:28.163295 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.163545 kubelet[3164]: E1124 00:17:28.163520 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.163545 kubelet[3164]: W1124 00:17:28.163528 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.163545 kubelet[3164]: E1124 00:17:28.163536 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.163775 kubelet[3164]: E1124 00:17:28.163751 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.163775 kubelet[3164]: W1124 00:17:28.163759 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.163775 kubelet[3164]: E1124 00:17:28.163766 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.164029 kubelet[3164]: E1124 00:17:28.164006 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.164029 kubelet[3164]: W1124 00:17:28.164014 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.164029 kubelet[3164]: E1124 00:17:28.164021 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.164538 kubelet[3164]: E1124 00:17:28.164454 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.164538 kubelet[3164]: W1124 00:17:28.164464 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.164538 kubelet[3164]: E1124 00:17:28.164473 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.164679 kubelet[3164]: E1124 00:17:28.164673 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.164760 kubelet[3164]: W1124 00:17:28.164710 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.164760 kubelet[3164]: E1124 00:17:28.164718 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.165061 kubelet[3164]: E1124 00:17:28.164980 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.165061 kubelet[3164]: W1124 00:17:28.164988 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.165061 kubelet[3164]: E1124 00:17:28.164996 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.166161 kubelet[3164]: E1124 00:17:28.166149 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.166253 kubelet[3164]: W1124 00:17:28.166227 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.166253 kubelet[3164]: E1124 00:17:28.166241 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.167171 kubelet[3164]: E1124 00:17:28.167095 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.167171 kubelet[3164]: W1124 00:17:28.167108 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.167171 kubelet[3164]: E1124 00:17:28.167121 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.168212 kubelet[3164]: E1124 00:17:28.168197 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.168212 kubelet[3164]: W1124 00:17:28.168209 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.168335 kubelet[3164]: E1124 00:17:28.168222 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.168390 kubelet[3164]: E1124 00:17:28.168375 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.168456 kubelet[3164]: W1124 00:17:28.168388 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.168456 kubelet[3164]: E1124 00:17:28.168397 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.168585 kubelet[3164]: E1124 00:17:28.168514 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.168585 kubelet[3164]: W1124 00:17:28.168520 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.168585 kubelet[3164]: E1124 00:17:28.168528 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.168699 kubelet[3164]: E1124 00:17:28.168648 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.168699 kubelet[3164]: W1124 00:17:28.168653 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.168699 kubelet[3164]: E1124 00:17:28.168660 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.238733 kubelet[3164]: E1124 00:17:28.238707 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.238733 kubelet[3164]: W1124 00:17:28.238725 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.238864 kubelet[3164]: E1124 00:17:28.238742 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.353641 containerd[1703]: time="2025-11-24T00:17:28.353516912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsgps,Uid:36120cc3-2058-4179-ab49-1500cc42b0af,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:28.404627 containerd[1703]: time="2025-11-24T00:17:28.404105196Z" level=info msg="connecting to shim dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d" address="unix:///run/containerd/s/b6b3c633a666f109cd7f0d960c7d4ac9a741ca831aa3c8fba7daa5a3dd173d3b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:28.427291 systemd[1]: Started cri-containerd-dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d.scope - libcontainer container dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d. Nov 24 00:17:28.485616 containerd[1703]: time="2025-11-24T00:17:28.485580799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsgps,Uid:36120cc3-2058-4179-ab49-1500cc42b0af,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\"" Nov 24 00:17:28.551287 kubelet[3164]: E1124 00:17:28.550972 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:28.646302 kubelet[3164]: E1124 00:17:28.646175 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.646302 kubelet[3164]: W1124 00:17:28.646232 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.646302 kubelet[3164]: E1124 00:17:28.646252 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646385 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.646789 kubelet[3164]: W1124 00:17:28.646391 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646399 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646496 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.646789 kubelet[3164]: W1124 00:17:28.646502 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646510 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646648 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.646789 kubelet[3164]: W1124 00:17:28.646654 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646678 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.646789 kubelet[3164]: E1124 00:17:28.646788 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647070 kubelet[3164]: W1124 00:17:28.646793 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647070 kubelet[3164]: E1124 00:17:28.646800 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647070 kubelet[3164]: E1124 00:17:28.646879 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647070 kubelet[3164]: W1124 00:17:28.646883 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647070 kubelet[3164]: E1124 00:17:28.646889 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647070 kubelet[3164]: E1124 00:17:28.646986 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647070 kubelet[3164]: W1124 00:17:28.646991 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647070 kubelet[3164]: E1124 00:17:28.646997 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647076 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647269 kubelet[3164]: W1124 00:17:28.647080 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647087 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647172 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647269 kubelet[3164]: W1124 00:17:28.647176 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647182 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647255 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647269 kubelet[3164]: W1124 00:17:28.647259 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647269 kubelet[3164]: E1124 00:17:28.647264 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647496 kubelet[3164]: E1124 00:17:28.647333 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647496 kubelet[3164]: W1124 00:17:28.647338 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647496 kubelet[3164]: E1124 00:17:28.647343 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647496 kubelet[3164]: E1124 00:17:28.647415 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647496 kubelet[3164]: W1124 00:17:28.647419 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647496 kubelet[3164]: E1124 00:17:28.647424 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.647496 kubelet[3164]: E1124 00:17:28.647498 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647673 kubelet[3164]: W1124 00:17:28.647502 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647673 kubelet[3164]: E1124 00:17:28.647509 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647673 kubelet[3164]: E1124 00:17:28.647580 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647673 kubelet[3164]: W1124 00:17:28.647585 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647673 kubelet[3164]: E1124 00:17:28.647591 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647673 kubelet[3164]: E1124 00:17:28.647663 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647673 kubelet[3164]: W1124 00:17:28.647667 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647673 kubelet[3164]: E1124 00:17:28.647673 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647872 kubelet[3164]: E1124 00:17:28.647747 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647872 kubelet[3164]: W1124 00:17:28.647751 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647872 kubelet[3164]: E1124 00:17:28.647757 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.647872 kubelet[3164]: E1124 00:17:28.647838 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.647872 kubelet[3164]: W1124 00:17:28.647842 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.647872 kubelet[3164]: E1124 00:17:28.647847 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.647947 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.648391 kubelet[3164]: W1124 00:17:28.647952 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.647958 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.648029 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.648391 kubelet[3164]: W1124 00:17:28.648032 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.648037 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.648136 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.648391 kubelet[3164]: W1124 00:17:28.648144 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.648391 kubelet[3164]: E1124 00:17:28.648151 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.659455 kubelet[3164]: E1124 00:17:28.659431 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.659455 kubelet[3164]: W1124 00:17:28.659447 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.659648 kubelet[3164]: E1124 00:17:28.659462 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.659648 kubelet[3164]: I1124 00:17:28.659490 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84288287-c520-476c-9981-2956ccc0c1dc-kubelet-dir\") pod \"csi-node-driver-jtqbh\" (UID: \"84288287-c520-476c-9981-2956ccc0c1dc\") " pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:28.659648 kubelet[3164]: E1124 00:17:28.659619 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.659648 kubelet[3164]: W1124 00:17:28.659627 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.659648 kubelet[3164]: E1124 00:17:28.659635 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.659829 kubelet[3164]: I1124 00:17:28.659747 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84288287-c520-476c-9981-2956ccc0c1dc-socket-dir\") pod \"csi-node-driver-jtqbh\" (UID: \"84288287-c520-476c-9981-2956ccc0c1dc\") " pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:28.659829 kubelet[3164]: E1124 00:17:28.659796 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.659829 kubelet[3164]: W1124 00:17:28.659802 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.659829 kubelet[3164]: E1124 00:17:28.659809 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660002 kubelet[3164]: E1124 00:17:28.659972 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660002 kubelet[3164]: W1124 00:17:28.659978 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660002 kubelet[3164]: E1124 00:17:28.659985 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660115 kubelet[3164]: E1124 00:17:28.660100 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660115 kubelet[3164]: W1124 00:17:28.660108 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660115 kubelet[3164]: E1124 00:17:28.660114 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.660234 kubelet[3164]: I1124 00:17:28.660129 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpxh\" (UniqueName: \"kubernetes.io/projected/84288287-c520-476c-9981-2956ccc0c1dc-kube-api-access-srpxh\") pod \"csi-node-driver-jtqbh\" (UID: \"84288287-c520-476c-9981-2956ccc0c1dc\") " pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:28.660234 kubelet[3164]: E1124 00:17:28.660229 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660305 kubelet[3164]: W1124 00:17:28.660235 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660305 kubelet[3164]: E1124 00:17:28.660242 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660378 kubelet[3164]: E1124 00:17:28.660358 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660378 kubelet[3164]: W1124 00:17:28.660364 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660378 kubelet[3164]: E1124 00:17:28.660370 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660473 kubelet[3164]: I1124 00:17:28.660259 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84288287-c520-476c-9981-2956ccc0c1dc-registration-dir\") pod \"csi-node-driver-jtqbh\" (UID: \"84288287-c520-476c-9981-2956ccc0c1dc\") " pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:28.660473 kubelet[3164]: E1124 00:17:28.660463 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660473 kubelet[3164]: W1124 00:17:28.660467 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660562 kubelet[3164]: E1124 00:17:28.660473 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660624 kubelet[3164]: E1124 00:17:28.660612 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660624 kubelet[3164]: W1124 00:17:28.660619 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660689 kubelet[3164]: E1124 00:17:28.660626 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.660746 kubelet[3164]: E1124 00:17:28.660739 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660746 kubelet[3164]: W1124 00:17:28.660746 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660815 kubelet[3164]: E1124 00:17:28.660753 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.660885 kubelet[3164]: E1124 00:17:28.660861 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.660885 kubelet[3164]: W1124 00:17:28.660878 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.660885 kubelet[3164]: E1124 00:17:28.660884 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.661034 kubelet[3164]: E1124 00:17:28.661022 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.661034 kubelet[3164]: W1124 00:17:28.661031 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.661104 kubelet[3164]: E1124 00:17:28.661039 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.661104 kubelet[3164]: I1124 00:17:28.661063 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/84288287-c520-476c-9981-2956ccc0c1dc-varrun\") pod \"csi-node-driver-jtqbh\" (UID: \"84288287-c520-476c-9981-2956ccc0c1dc\") " pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:28.661191 kubelet[3164]: E1124 00:17:28.661174 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.661191 kubelet[3164]: W1124 00:17:28.661185 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.661263 kubelet[3164]: E1124 00:17:28.661191 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.661361 kubelet[3164]: E1124 00:17:28.661342 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.661361 kubelet[3164]: W1124 00:17:28.661356 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.661421 kubelet[3164]: E1124 00:17:28.661363 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.661456 kubelet[3164]: E1124 00:17:28.661453 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.661483 kubelet[3164]: W1124 00:17:28.661458 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.661483 kubelet[3164]: E1124 00:17:28.661464 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.762170 kubelet[3164]: E1124 00:17:28.762140 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.762170 kubelet[3164]: W1124 00:17:28.762160 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.762170 kubelet[3164]: E1124 00:17:28.762178 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762326 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.762711 kubelet[3164]: W1124 00:17:28.762333 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762341 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762510 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.762711 kubelet[3164]: W1124 00:17:28.762518 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762527 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762660 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.762711 kubelet[3164]: W1124 00:17:28.762666 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.762711 kubelet[3164]: E1124 00:17:28.762674 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.762976 kubelet[3164]: E1124 00:17:28.762779 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.762976 kubelet[3164]: W1124 00:17:28.762784 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.762976 kubelet[3164]: E1124 00:17:28.762791 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763056 kubelet[3164]: E1124 00:17:28.762997 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763056 kubelet[3164]: W1124 00:17:28.763005 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763056 kubelet[3164]: E1124 00:17:28.763013 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763161 kubelet[3164]: E1124 00:17:28.763152 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763161 kubelet[3164]: W1124 00:17:28.763159 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763218 kubelet[3164]: E1124 00:17:28.763165 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763309 kubelet[3164]: E1124 00:17:28.763294 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763355 kubelet[3164]: W1124 00:17:28.763304 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763355 kubelet[3164]: E1124 00:17:28.763320 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.763459 kubelet[3164]: E1124 00:17:28.763443 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763459 kubelet[3164]: W1124 00:17:28.763456 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763509 kubelet[3164]: E1124 00:17:28.763463 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763634 kubelet[3164]: E1124 00:17:28.763611 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763634 kubelet[3164]: W1124 00:17:28.763631 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763707 kubelet[3164]: E1124 00:17:28.763639 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763805 kubelet[3164]: E1124 00:17:28.763789 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763805 kubelet[3164]: W1124 00:17:28.763802 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.763866 kubelet[3164]: E1124 00:17:28.763811 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.763964 kubelet[3164]: E1124 00:17:28.763953 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.763964 kubelet[3164]: W1124 00:17:28.763962 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764042 kubelet[3164]: E1124 00:17:28.763969 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.764101 kubelet[3164]: E1124 00:17:28.764092 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.764137 kubelet[3164]: W1124 00:17:28.764100 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764137 kubelet[3164]: E1124 00:17:28.764107 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.764220 kubelet[3164]: E1124 00:17:28.764197 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.764220 kubelet[3164]: W1124 00:17:28.764202 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764220 kubelet[3164]: E1124 00:17:28.764208 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.764346 kubelet[3164]: E1124 00:17:28.764321 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.764346 kubelet[3164]: W1124 00:17:28.764326 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764346 kubelet[3164]: E1124 00:17:28.764332 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.764534 kubelet[3164]: E1124 00:17:28.764526 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.764664 kubelet[3164]: W1124 00:17:28.764584 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764664 kubelet[3164]: E1124 00:17:28.764595 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.764917 kubelet[3164]: E1124 00:17:28.764842 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.764917 kubelet[3164]: W1124 00:17:28.764852 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.764917 kubelet[3164]: E1124 00:17:28.764861 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.765169 kubelet[3164]: E1124 00:17:28.765118 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.765169 kubelet[3164]: W1124 00:17:28.765126 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.765169 kubelet[3164]: E1124 00:17:28.765135 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.765401 kubelet[3164]: E1124 00:17:28.765341 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.765401 kubelet[3164]: W1124 00:17:28.765348 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.765401 kubelet[3164]: E1124 00:17:28.765356 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.765578 kubelet[3164]: E1124 00:17:28.765525 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.765578 kubelet[3164]: W1124 00:17:28.765532 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.765578 kubelet[3164]: E1124 00:17:28.765539 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.765809 kubelet[3164]: E1124 00:17:28.765785 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.765809 kubelet[3164]: W1124 00:17:28.765793 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.765930 kubelet[3164]: E1124 00:17:28.765874 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.766186 kubelet[3164]: E1124 00:17:28.766134 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.766186 kubelet[3164]: W1124 00:17:28.766147 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.766186 kubelet[3164]: E1124 00:17:28.766157 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.766452 kubelet[3164]: E1124 00:17:28.766439 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.766452 kubelet[3164]: W1124 00:17:28.766451 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.766514 kubelet[3164]: E1124 00:17:28.766461 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:28.766605 kubelet[3164]: E1124 00:17:28.766593 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.766605 kubelet[3164]: W1124 00:17:28.766601 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.766663 kubelet[3164]: E1124 00:17:28.766608 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.766856 kubelet[3164]: E1124 00:17:28.766835 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.766856 kubelet[3164]: W1124 00:17:28.766853 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.766969 kubelet[3164]: E1124 00:17:28.766860 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:28.799548 kubelet[3164]: E1124 00:17:28.799479 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:28.799548 kubelet[3164]: W1124 00:17:28.799499 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:28.799548 kubelet[3164]: E1124 00:17:28.799514 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:29.520433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846696980.mount: Deactivated successfully. 
Nov 24 00:17:29.997877 containerd[1703]: time="2025-11-24T00:17:29.997835209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:30.000600 containerd[1703]: time="2025-11-24T00:17:30.000456479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:17:30.003513 containerd[1703]: time="2025-11-24T00:17:30.003482359Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:30.007410 containerd[1703]: time="2025-11-24T00:17:30.007381479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:30.007916 containerd[1703]: time="2025-11-24T00:17:30.007720211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.880628264s" Nov 24 00:17:30.007916 containerd[1703]: time="2025-11-24T00:17:30.007749787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:17:30.008695 containerd[1703]: time="2025-11-24T00:17:30.008672280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:17:30.029209 containerd[1703]: time="2025-11-24T00:17:30.029160611Z" level=info msg="CreateContainer within sandbox \"5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:17:30.052521 containerd[1703]: time="2025-11-24T00:17:30.052482522Z" level=info msg="Container 8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:30.079190 containerd[1703]: time="2025-11-24T00:17:30.079156147Z" level=info msg="CreateContainer within sandbox \"5049d50e290f2415a6ec78d825c38aaaf199001a1c763ac4da869d98fec30c8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094\"" Nov 24 00:17:30.079912 containerd[1703]: time="2025-11-24T00:17:30.079868563Z" level=info msg="StartContainer for \"8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094\"" Nov 24 00:17:30.081268 containerd[1703]: time="2025-11-24T00:17:30.081239765Z" level=info msg="connecting to shim 8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094" address="unix:///run/containerd/s/17de2a41bd927a7357aa1f707bb3309d4f79ba6a05db4803154bcf4f43c05caa" protocol=ttrpc version=3 Nov 24 00:17:30.108046 systemd[1]: Started cri-containerd-8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094.scope - libcontainer container 8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094. 
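For the calico-typha startup recorded here and in the kubelet pod-startup entry just below, the figures fit together with simple arithmetic (all values are taken from the log; the reading that the SLO figure excludes the image-pull window is inferred from how the numbers line up):

  podStartE2EDuration = observedRunningTime - podCreationTimestamp
                      = 00:17:30.238674351 - 00:17:27.000000000 = 3.238674351 s
  image-pull window   = lastFinishedPulling - firstStartedPulling (monotonic m=+ offsets)
                      = 24.981499766 s - 23.099727343 s = 1.881772423 s
  podStartSLOduration = 3.238674351 s - 1.881772423 s = 1.356901928 s

which matches the logged value exactly; the roughly 1.88 s pull window is also consistent with the "in 1.880628264s" containerd reports above for the typha image.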
Nov 24 00:17:30.112356 kubelet[3164]: E1124 00:17:30.112303 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:30.164134 containerd[1703]: time="2025-11-24T00:17:30.164029750Z" level=info msg="StartContainer for \"8dc5f0e6ecc46f5b7bad3dce2ad761a1dcfae3639fc5f2378cce87ca6d1c9094\" returns successfully" Nov 24 00:17:30.238758 kubelet[3164]: I1124 00:17:30.238691 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74b749cc75-xf56k" podStartSLOduration=1.356901928 podStartE2EDuration="3.238674351s" podCreationTimestamp="2025-11-24 00:17:27 +0000 UTC" firstStartedPulling="2025-11-24 00:17:28.126705939 +0000 UTC m=+23.099727343" lastFinishedPulling="2025-11-24 00:17:30.008478358 +0000 UTC m=+24.981499766" observedRunningTime="2025-11-24 00:17:30.238404813 +0000 UTC m=+25.211426227" watchObservedRunningTime="2025-11-24 00:17:30.238674351 +0000 UTC m=+25.211695787" Nov 24 00:17:30.257421 kubelet[3164]: E1124 00:17:30.256855 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.257421 kubelet[3164]: W1124 00:17:30.256938 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.257421 kubelet[3164]: E1124 00:17:30.256962 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.258631 kubelet[3164]: E1124 00:17:30.258567 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.258631 kubelet[3164]: W1124 00:17:30.258605 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.258846 kubelet[3164]: E1124 00:17:30.258772 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.259001 kubelet[3164]: E1124 00:17:30.258988 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.259660 kubelet[3164]: W1124 00:17:30.259046 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.259660 kubelet[3164]: E1124 00:17:30.259059 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.260031 kubelet[3164]: E1124 00:17:30.260016 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.260180 kubelet[3164]: W1124 00:17:30.260083 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.260180 kubelet[3164]: E1124 00:17:30.260098 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.260356 kubelet[3164]: E1124 00:17:30.260349 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.260444 kubelet[3164]: W1124 00:17:30.260404 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.260444 kubelet[3164]: E1124 00:17:30.260416 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.260966 kubelet[3164]: E1124 00:17:30.260721 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.260966 kubelet[3164]: W1124 00:17:30.260733 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.261129 kubelet[3164]: E1124 00:17:30.261074 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.261528 kubelet[3164]: E1124 00:17:30.261308 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.261528 kubelet[3164]: W1124 00:17:30.261338 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.261528 kubelet[3164]: E1124 00:17:30.261351 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.261745 kubelet[3164]: E1124 00:17:30.261717 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.261989 kubelet[3164]: W1124 00:17:30.261830 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.261989 kubelet[3164]: E1124 00:17:30.261847 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.262998 kubelet[3164]: E1124 00:17:30.262951 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.262998 kubelet[3164]: W1124 00:17:30.262965 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.262998 kubelet[3164]: E1124 00:17:30.262980 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.263341 kubelet[3164]: E1124 00:17:30.263293 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.263341 kubelet[3164]: W1124 00:17:30.263306 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.263341 kubelet[3164]: E1124 00:17:30.263319 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.263635 kubelet[3164]: E1124 00:17:30.263557 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.263770 kubelet[3164]: W1124 00:17:30.263679 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.263770 kubelet[3164]: E1124 00:17:30.263691 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.264857 kubelet[3164]: E1124 00:17:30.264842 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.264985 kubelet[3164]: W1124 00:17:30.264960 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.264985 kubelet[3164]: E1124 00:17:30.264982 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.265998 kubelet[3164]: E1124 00:17:30.265974 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.265998 kubelet[3164]: W1124 00:17:30.265995 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.266252 kubelet[3164]: E1124 00:17:30.266008 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.266290 kubelet[3164]: E1124 00:17:30.266255 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.266290 kubelet[3164]: W1124 00:17:30.266264 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.266290 kubelet[3164]: E1124 00:17:30.266275 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.266420 kubelet[3164]: E1124 00:17:30.266412 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.266445 kubelet[3164]: W1124 00:17:30.266422 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.266445 kubelet[3164]: E1124 00:17:30.266430 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.273765 kubelet[3164]: E1124 00:17:30.273751 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.273928 kubelet[3164]: W1124 00:17:30.273849 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.273928 kubelet[3164]: E1124 00:17:30.273864 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.275024 kubelet[3164]: E1124 00:17:30.274980 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.275024 kubelet[3164]: W1124 00:17:30.274996 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.275024 kubelet[3164]: E1124 00:17:30.275011 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.275370 kubelet[3164]: E1124 00:17:30.275336 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.275370 kubelet[3164]: W1124 00:17:30.275347 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.275370 kubelet[3164]: E1124 00:17:30.275358 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.275747 kubelet[3164]: E1124 00:17:30.275680 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.275747 kubelet[3164]: W1124 00:17:30.275724 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.275747 kubelet[3164]: E1124 00:17:30.275735 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.276489 kubelet[3164]: E1124 00:17:30.276472 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.276489 kubelet[3164]: W1124 00:17:30.276488 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.276585 kubelet[3164]: E1124 00:17:30.276502 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.277209 kubelet[3164]: E1124 00:17:30.277186 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.277209 kubelet[3164]: W1124 00:17:30.277205 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.277302 kubelet[3164]: E1124 00:17:30.277216 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.278005 kubelet[3164]: E1124 00:17:30.277984 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.278005 kubelet[3164]: W1124 00:17:30.278004 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.278095 kubelet[3164]: E1124 00:17:30.278016 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.278239 kubelet[3164]: E1124 00:17:30.278225 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.278272 kubelet[3164]: W1124 00:17:30.278240 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.278272 kubelet[3164]: E1124 00:17:30.278251 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.278433 kubelet[3164]: E1124 00:17:30.278420 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.278467 kubelet[3164]: W1124 00:17:30.278434 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.278467 kubelet[3164]: E1124 00:17:30.278444 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.278585 kubelet[3164]: E1124 00:17:30.278576 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.278613 kubelet[3164]: W1124 00:17:30.278586 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.278613 kubelet[3164]: E1124 00:17:30.278594 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.278752 kubelet[3164]: E1124 00:17:30.278742 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.278782 kubelet[3164]: W1124 00:17:30.278752 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.278782 kubelet[3164]: E1124 00:17:30.278761 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.279197 kubelet[3164]: E1124 00:17:30.279134 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.279197 kubelet[3164]: W1124 00:17:30.279146 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.279197 kubelet[3164]: E1124 00:17:30.279158 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.279475 kubelet[3164]: E1124 00:17:30.279337 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.279475 kubelet[3164]: W1124 00:17:30.279348 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.279475 kubelet[3164]: E1124 00:17:30.279357 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:30.281007 kubelet[3164]: E1124 00:17:30.280984 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.281007 kubelet[3164]: W1124 00:17:30.281005 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.281113 kubelet[3164]: E1124 00:17:30.281020 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.281212 kubelet[3164]: E1124 00:17:30.281201 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.281241 kubelet[3164]: W1124 00:17:30.281212 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.281241 kubelet[3164]: E1124 00:17:30.281222 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.281402 kubelet[3164]: E1124 00:17:30.281391 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.281434 kubelet[3164]: W1124 00:17:30.281402 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.281434 kubelet[3164]: E1124 00:17:30.281411 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.281597 kubelet[3164]: E1124 00:17:30.281586 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.281597 kubelet[3164]: W1124 00:17:30.281597 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.281651 kubelet[3164]: E1124 00:17:30.281606 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:30.282255 kubelet[3164]: E1124 00:17:30.281920 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:30.282255 kubelet[3164]: W1124 00:17:30.281931 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:30.282255 kubelet[3164]: E1124 00:17:30.281941 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.150949 containerd[1703]: time="2025-11-24T00:17:31.150887432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:31.154911 containerd[1703]: time="2025-11-24T00:17:31.154869474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:17:31.158459 containerd[1703]: time="2025-11-24T00:17:31.158416163Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:31.164984 containerd[1703]: time="2025-11-24T00:17:31.164394920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:31.164984 containerd[1703]: time="2025-11-24T00:17:31.164855129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.156069042s" Nov 24 00:17:31.164984 containerd[1703]: time="2025-11-24T00:17:31.164883416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:17:31.172790 containerd[1703]: time="2025-11-24T00:17:31.172758302Z" level=info msg="CreateContainer within sandbox \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:17:31.192507 containerd[1703]: time="2025-11-24T00:17:31.190988911Z" level=info msg="Container c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:31.210244 containerd[1703]: time="2025-11-24T00:17:31.210214007Z" level=info msg="CreateContainer within sandbox \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36\"" Nov 24 00:17:31.211794 containerd[1703]: time="2025-11-24T00:17:31.210736206Z" level=info msg="StartContainer for \"c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36\"" Nov 24 00:17:31.212268 containerd[1703]: time="2025-11-24T00:17:31.212242312Z" level=info msg="connecting to shim c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36" address="unix:///run/containerd/s/b6b3c633a666f109cd7f0d960c7d4ac9a741ca831aa3c8fba7daa5a3dd173d3b" protocol=ttrpc version=3 Nov 24 00:17:31.225345 kubelet[3164]: I1124 00:17:31.225316 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:17:31.234086 systemd[1]: Started cri-containerd-c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36.scope - libcontainer container c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36. 
Nov 24 00:17:31.271269 kubelet[3164]: E1124 00:17:31.271219 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.271501 kubelet[3164]: W1124 00:17:31.271333 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.271501 kubelet[3164]: E1124 00:17:31.271358 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.271770 kubelet[3164]: E1124 00:17:31.271711 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.271770 kubelet[3164]: W1124 00:17:31.271733 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.271770 kubelet[3164]: E1124 00:17:31.271743 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.271987 kubelet[3164]: E1124 00:17:31.271968 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.271987 kubelet[3164]: W1124 00:17:31.271975 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.272121 kubelet[3164]: E1124 00:17:31.272064 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.272239 kubelet[3164]: E1124 00:17:31.272220 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.272239 kubelet[3164]: W1124 00:17:31.272226 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.272336 kubelet[3164]: E1124 00:17:31.272299 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.272485 kubelet[3164]: E1124 00:17:31.272478 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.272545 kubelet[3164]: W1124 00:17:31.272538 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.272629 kubelet[3164]: E1124 00:17:31.272592 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.272781 kubelet[3164]: E1124 00:17:31.272766 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.272863 kubelet[3164]: W1124 00:17:31.272827 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.272863 kubelet[3164]: E1124 00:17:31.272837 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.273083 kubelet[3164]: E1124 00:17:31.273039 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.273083 kubelet[3164]: W1124 00:17:31.273046 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.273083 kubelet[3164]: E1124 00:17:31.273053 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.273277 kubelet[3164]: E1124 00:17:31.273267 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.273386 kubelet[3164]: W1124 00:17:31.273308 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.273386 kubelet[3164]: E1124 00:17:31.273316 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.273606 kubelet[3164]: E1124 00:17:31.273555 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.273606 kubelet[3164]: W1124 00:17:31.273563 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.273606 kubelet[3164]: E1124 00:17:31.273572 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.273868 kubelet[3164]: E1124 00:17:31.273845 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.273972 kubelet[3164]: W1124 00:17:31.273931 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.273972 kubelet[3164]: E1124 00:17:31.273944 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.274167 kubelet[3164]: E1124 00:17:31.274161 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.274254 kubelet[3164]: W1124 00:17:31.274212 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.274254 kubelet[3164]: E1124 00:17:31.274222 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.274473 kubelet[3164]: E1124 00:17:31.274433 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.274473 kubelet[3164]: W1124 00:17:31.274441 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.274473 kubelet[3164]: E1124 00:17:31.274448 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.274715 kubelet[3164]: E1124 00:17:31.274670 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.274715 kubelet[3164]: W1124 00:17:31.274678 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.274715 kubelet[3164]: E1124 00:17:31.274688 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.274997 kubelet[3164]: E1124 00:17:31.274948 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.274997 kubelet[3164]: W1124 00:17:31.274957 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.274997 kubelet[3164]: E1124 00:17:31.274966 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.275277 kubelet[3164]: E1124 00:17:31.275222 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.275277 kubelet[3164]: W1124 00:17:31.275230 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.275277 kubelet[3164]: E1124 00:17:31.275239 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.285949 kubelet[3164]: E1124 00:17:31.285930 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.286106 kubelet[3164]: W1124 00:17:31.285980 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.286106 kubelet[3164]: E1124 00:17:31.285995 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.286416 kubelet[3164]: E1124 00:17:31.286387 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.286416 kubelet[3164]: W1124 00:17:31.286396 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.286416 kubelet[3164]: E1124 00:17:31.286407 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.287072 kubelet[3164]: E1124 00:17:31.287056 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.287333 kubelet[3164]: W1124 00:17:31.287254 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.287333 kubelet[3164]: E1124 00:17:31.287271 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.288314 kubelet[3164]: E1124 00:17:31.288172 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.288314 kubelet[3164]: W1124 00:17:31.288192 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.288314 kubelet[3164]: E1124 00:17:31.288208 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.289285 kubelet[3164]: E1124 00:17:31.289139 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.289479 kubelet[3164]: W1124 00:17:31.289154 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.289479 kubelet[3164]: E1124 00:17:31.289448 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.290312 kubelet[3164]: E1124 00:17:31.290174 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.290312 kubelet[3164]: W1124 00:17:31.290187 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.290312 kubelet[3164]: E1124 00:17:31.290202 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.291317 kubelet[3164]: E1124 00:17:31.291303 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.291478 kubelet[3164]: W1124 00:17:31.291389 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.291478 kubelet[3164]: E1124 00:17:31.291404 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.291938 kubelet[3164]: E1124 00:17:31.291919 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.291938 kubelet[3164]: W1124 00:17:31.291937 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.292034 kubelet[3164]: E1124 00:17:31.291950 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.292955 kubelet[3164]: E1124 00:17:31.292941 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.293309 kubelet[3164]: W1124 00:17:31.293033 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.293414 kubelet[3164]: E1124 00:17:31.293363 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.293570 kubelet[3164]: E1124 00:17:31.293563 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.293630 kubelet[3164]: W1124 00:17:31.293622 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.293679 kubelet[3164]: E1124 00:17:31.293672 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.293849 kubelet[3164]: E1124 00:17:31.293843 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.293891 kubelet[3164]: W1124 00:17:31.293886 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.293965 kubelet[3164]: E1124 00:17:31.293958 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.294151 kubelet[3164]: E1124 00:17:31.294128 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.294151 kubelet[3164]: W1124 00:17:31.294135 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.294151 kubelet[3164]: E1124 00:17:31.294143 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.294398 kubelet[3164]: E1124 00:17:31.294371 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.294398 kubelet[3164]: W1124 00:17:31.294379 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.294398 kubelet[3164]: E1124 00:17:31.294387 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.294660 kubelet[3164]: E1124 00:17:31.294635 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.294660 kubelet[3164]: W1124 00:17:31.294643 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.294660 kubelet[3164]: E1124 00:17:31.294652 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.295358 kubelet[3164]: E1124 00:17:31.294945 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.295446 kubelet[3164]: W1124 00:17:31.295434 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.295494 kubelet[3164]: E1124 00:17:31.295485 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:17:31.295846 kubelet[3164]: E1124 00:17:31.295739 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.295934 kubelet[3164]: W1124 00:17:31.295925 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.298268 kubelet[3164]: E1124 00:17:31.295958 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.298268 kubelet[3164]: E1124 00:17:31.296301 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.298268 kubelet[3164]: W1124 00:17:31.296309 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.298268 kubelet[3164]: E1124 00:17:31.296318 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.298268 kubelet[3164]: E1124 00:17:31.296461 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:17:31.298268 kubelet[3164]: W1124 00:17:31.296467 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:17:31.298268 kubelet[3164]: E1124 00:17:31.296475 3164 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:17:31.302597 containerd[1703]: time="2025-11-24T00:17:31.302552165Z" level=info msg="StartContainer for \"c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36\" returns successfully" Nov 24 00:17:31.310067 systemd[1]: cri-containerd-c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36.scope: Deactivated successfully. Nov 24 00:17:31.311191 containerd[1703]: time="2025-11-24T00:17:31.311160558Z" level=info msg="received container exit event container_id:\"c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36\" id:\"c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36\" pid:3888 exited_at:{seconds:1763943451 nanos:310847518}" Nov 24 00:17:31.334793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3c00335f09544829e3e77c7a8aa8e2060664e1931db1d18689c44a8fa88db36-rootfs.mount: Deactivated successfully. 
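The repeated kubelet errors above all come from FlexVolume plugin probing: kubelet keeps finding the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to run its uds driver with the init argument, the binary is not present, and the empty output then fails JSON decoding. The minimal Go sketch below only illustrates how those two error strings arise from the standard library; it is not kubelet's driver-call.go, and the driverStatus shape is an assumed approximation of the usual FlexVolume init convention ({"status":"Success","capabilities":{...}}).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates what a FlexVolume driver is expected to print for
// "init" (assumed convention, e.g. {"status":"Success","capabilities":{"attach":false}}).
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A driver binary that is not installed cannot be resolved; LookPath
	// reports: exec: "uds": executable file not found in $PATH
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err)
	}

	// With no driver run, the captured output stays empty, and decoding an
	// empty byte slice is what yields "unexpected end of JSON input".
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err)
	}
}

Run once, this prints both messages seen in the probe loop above; the loop repeats simply because kubelet keeps re-probing the plugin directory.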
Nov 24 00:17:32.110392 kubelet[3164]: E1124 00:17:32.110346 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:34.110849 kubelet[3164]: E1124 00:17:34.110554 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:34.236391 containerd[1703]: time="2025-11-24T00:17:34.235474316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:17:36.111080 kubelet[3164]: E1124 00:17:36.110050 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:36.682982 containerd[1703]: time="2025-11-24T00:17:36.682936525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:36.686006 containerd[1703]: time="2025-11-24T00:17:36.685975424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:17:36.689628 containerd[1703]: time="2025-11-24T00:17:36.689595507Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:36.694165 containerd[1703]: time="2025-11-24T00:17:36.694135199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:36.694614 containerd[1703]: time="2025-11-24T00:17:36.694578346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.459062425s" Nov 24 00:17:36.694661 containerd[1703]: time="2025-11-24T00:17:36.694611893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:17:36.703928 containerd[1703]: time="2025-11-24T00:17:36.703877849Z" level=info msg="CreateContainer within sandbox \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:17:36.725706 containerd[1703]: time="2025-11-24T00:17:36.725675372Z" level=info msg="Container 9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:36.745016 containerd[1703]: time="2025-11-24T00:17:36.744970784Z" level=info msg="CreateContainer within sandbox 
\"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36\"" Nov 24 00:17:36.745718 containerd[1703]: time="2025-11-24T00:17:36.745688343Z" level=info msg="StartContainer for \"9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36\"" Nov 24 00:17:36.747332 containerd[1703]: time="2025-11-24T00:17:36.747277597Z" level=info msg="connecting to shim 9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36" address="unix:///run/containerd/s/b6b3c633a666f109cd7f0d960c7d4ac9a741ca831aa3c8fba7daa5a3dd173d3b" protocol=ttrpc version=3 Nov 24 00:17:36.773064 systemd[1]: Started cri-containerd-9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36.scope - libcontainer container 9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36. Nov 24 00:17:36.830359 containerd[1703]: time="2025-11-24T00:17:36.830319081Z" level=info msg="StartContainer for \"9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36\" returns successfully" Nov 24 00:17:38.090653 containerd[1703]: time="2025-11-24T00:17:38.090600759Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:17:38.092668 systemd[1]: cri-containerd-9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36.scope: Deactivated successfully. Nov 24 00:17:38.093716 systemd[1]: cri-containerd-9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36.scope: Consumed 441ms CPU time, 200.5M memory peak, 171.3M written to disk. Nov 24 00:17:38.095761 containerd[1703]: time="2025-11-24T00:17:38.095731220Z" level=info msg="received container exit event container_id:\"9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36\" id:\"9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36\" pid:3979 exited_at:{seconds:1763943458 nanos:95505056}" Nov 24 00:17:38.110921 kubelet[3164]: E1124 00:17:38.110815 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:38.116592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b958aaf843b984aa71704295f23c438879b08ed8851718044a3ad172e089f36-rootfs.mount: Deactivated successfully. Nov 24 00:17:38.148073 kubelet[3164]: I1124 00:17:38.148049 3164 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:17:38.384710 systemd[1]: Created slice kubepods-burstable-pod9df6de2d_6161_464f_b632_ed239b828bf6.slice - libcontainer container kubepods-burstable-pod9df6de2d_6161_464f_b632_ed239b828bf6.slice. 
Nov 24 00:17:38.438808 kubelet[3164]: I1124 00:17:38.438753 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9df6de2d-6161-464f-b632-ed239b828bf6-config-volume\") pod \"coredns-674b8bbfcf-6q9k7\" (UID: \"9df6de2d-6161-464f-b632-ed239b828bf6\") " pod="kube-system/coredns-674b8bbfcf-6q9k7" Nov 24 00:17:38.438808 kubelet[3164]: I1124 00:17:38.438801 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpxb\" (UniqueName: \"kubernetes.io/projected/9df6de2d-6161-464f-b632-ed239b828bf6-kube-api-access-gxpxb\") pod \"coredns-674b8bbfcf-6q9k7\" (UID: \"9df6de2d-6161-464f-b632-ed239b828bf6\") " pod="kube-system/coredns-674b8bbfcf-6q9k7" Nov 24 00:17:38.586591 systemd[1]: Created slice kubepods-besteffort-pod2f2c6ff3_1189_42e9_9af0_0ec13ea4acf3.slice - libcontainer container kubepods-besteffort-pod2f2c6ff3_1189_42e9_9af0_0ec13ea4acf3.slice. Nov 24 00:17:38.640105 kubelet[3164]: I1124 00:17:38.639943 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-backend-key-pair\") pod \"whisker-7bc446466-4v2dc\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " pod="calico-system/whisker-7bc446466-4v2dc" Nov 24 00:17:38.640105 kubelet[3164]: I1124 00:17:38.639996 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-ca-bundle\") pod \"whisker-7bc446466-4v2dc\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " pod="calico-system/whisker-7bc446466-4v2dc" Nov 24 00:17:38.640105 kubelet[3164]: I1124 00:17:38.640013 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plngz\" (UniqueName: \"kubernetes.io/projected/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-kube-api-access-plngz\") pod \"whisker-7bc446466-4v2dc\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " pod="calico-system/whisker-7bc446466-4v2dc" Nov 24 00:17:38.688681 containerd[1703]: time="2025-11-24T00:17:38.688639656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6q9k7,Uid:9df6de2d-6161-464f-b632-ed239b828bf6,Namespace:kube-system,Attempt:0,}" Nov 24 00:17:39.024082 containerd[1703]: time="2025-11-24T00:17:39.023690806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc446466-4v2dc,Uid:2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:39.048279 systemd[1]: Created slice kubepods-besteffort-podb4e92bfb_9155_4b18_ad04_f06b341ea73b.slice - libcontainer container kubepods-besteffort-podb4e92bfb_9155_4b18_ad04_f06b341ea73b.slice. Nov 24 00:17:39.058156 systemd[1]: Created slice kubepods-besteffort-pod17c95f35_9a12_4372_90d3_ee8b8cc1e636.slice - libcontainer container kubepods-besteffort-pod17c95f35_9a12_4372_90d3_ee8b8cc1e636.slice. Nov 24 00:17:39.071874 systemd[1]: Created slice kubepods-burstable-podc88ea087_41f9_4607_b474_e4073ad22f81.slice - libcontainer container kubepods-burstable-podc88ea087_41f9_4607_b474_e4073ad22f81.slice. 
Nov 24 00:17:39.082724 systemd[1]: Created slice kubepods-besteffort-pode06c5900_d0dc_4011_934f_01926c96ebe8.slice - libcontainer container kubepods-besteffort-pode06c5900_d0dc_4011_934f_01926c96ebe8.slice. Nov 24 00:17:39.091474 systemd[1]: Created slice kubepods-besteffort-podea124eb0_3624_454a_aec9_841dde50238f.slice - libcontainer container kubepods-besteffort-podea124eb0_3624_454a_aec9_841dde50238f.slice. Nov 24 00:17:39.103746 systemd[1]: Created slice kubepods-besteffort-poda5868a48_f0a4_49b1_9a5f_48199ea4ea4e.slice - libcontainer container kubepods-besteffort-poda5868a48_f0a4_49b1_9a5f_48199ea4ea4e.slice. Nov 24 00:17:39.142649 kubelet[3164]: I1124 00:17:39.142618 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e06c5900-d0dc-4011-934f-01926c96ebe8-goldmane-key-pair\") pod \"goldmane-666569f655-8qltp\" (UID: \"e06c5900-d0dc-4011-934f-01926c96ebe8\") " pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.143593 kubelet[3164]: I1124 00:17:39.143185 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27kj\" (UniqueName: \"kubernetes.io/projected/c88ea087-41f9-4607-b474-e4073ad22f81-kube-api-access-s27kj\") pod \"coredns-674b8bbfcf-7htth\" (UID: \"c88ea087-41f9-4607-b474-e4073ad22f81\") " pod="kube-system/coredns-674b8bbfcf-7htth" Nov 24 00:17:39.143593 kubelet[3164]: I1124 00:17:39.143519 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06c5900-d0dc-4011-934f-01926c96ebe8-config\") pod \"goldmane-666569f655-8qltp\" (UID: \"e06c5900-d0dc-4011-934f-01926c96ebe8\") " pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.143877 kubelet[3164]: I1124 00:17:39.143793 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrf74\" (UniqueName: \"kubernetes.io/projected/e06c5900-d0dc-4011-934f-01926c96ebe8-kube-api-access-wrf74\") pod \"goldmane-666569f655-8qltp\" (UID: \"e06c5900-d0dc-4011-934f-01926c96ebe8\") " pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.143877 kubelet[3164]: I1124 00:17:39.143825 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5868a48-f0a4-49b1-9a5f-48199ea4ea4e-calico-apiserver-certs\") pod \"calico-apiserver-78fddc585d-2dpds\" (UID: \"a5868a48-f0a4-49b1-9a5f-48199ea4ea4e\") " pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" Nov 24 00:17:39.143877 kubelet[3164]: I1124 00:17:39.143845 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd6r\" (UniqueName: \"kubernetes.io/projected/a5868a48-f0a4-49b1-9a5f-48199ea4ea4e-kube-api-access-zkd6r\") pod \"calico-apiserver-78fddc585d-2dpds\" (UID: \"a5868a48-f0a4-49b1-9a5f-48199ea4ea4e\") " pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" Nov 24 00:17:39.143877 kubelet[3164]: I1124 00:17:39.143862 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4e92bfb-9155-4b18-ad04-f06b341ea73b-calico-apiserver-certs\") pod \"calico-apiserver-869ddb6fcd-cvhld\" (UID: \"b4e92bfb-9155-4b18-ad04-f06b341ea73b\") " pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" Nov 
24 00:17:39.144118 kubelet[3164]: I1124 00:17:39.144057 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c88ea087-41f9-4607-b474-e4073ad22f81-config-volume\") pod \"coredns-674b8bbfcf-7htth\" (UID: \"c88ea087-41f9-4607-b474-e4073ad22f81\") " pod="kube-system/coredns-674b8bbfcf-7htth" Nov 24 00:17:39.144318 kubelet[3164]: I1124 00:17:39.144268 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gpf\" (UniqueName: \"kubernetes.io/projected/b4e92bfb-9155-4b18-ad04-f06b341ea73b-kube-api-access-q4gpf\") pod \"calico-apiserver-869ddb6fcd-cvhld\" (UID: \"b4e92bfb-9155-4b18-ad04-f06b341ea73b\") " pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" Nov 24 00:17:39.144318 kubelet[3164]: I1124 00:17:39.144290 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17c95f35-9a12-4372-90d3-ee8b8cc1e636-calico-apiserver-certs\") pod \"calico-apiserver-869ddb6fcd-pdbfs\" (UID: \"17c95f35-9a12-4372-90d3-ee8b8cc1e636\") " pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" Nov 24 00:17:39.144533 kubelet[3164]: I1124 00:17:39.144477 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea124eb0-3624-454a-aec9-841dde50238f-tigera-ca-bundle\") pod \"calico-kube-controllers-55c8987d79-wj8qt\" (UID: \"ea124eb0-3624-454a-aec9-841dde50238f\") " pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" Nov 24 00:17:39.144533 kubelet[3164]: I1124 00:17:39.144504 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjkv\" (UniqueName: \"kubernetes.io/projected/17c95f35-9a12-4372-90d3-ee8b8cc1e636-kube-api-access-vxjkv\") pod \"calico-apiserver-869ddb6fcd-pdbfs\" (UID: \"17c95f35-9a12-4372-90d3-ee8b8cc1e636\") " pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" Nov 24 00:17:39.144764 kubelet[3164]: I1124 00:17:39.144720 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvlqp\" (UniqueName: \"kubernetes.io/projected/ea124eb0-3624-454a-aec9-841dde50238f-kube-api-access-kvlqp\") pod \"calico-kube-controllers-55c8987d79-wj8qt\" (UID: \"ea124eb0-3624-454a-aec9-841dde50238f\") " pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" Nov 24 00:17:39.144975 kubelet[3164]: I1124 00:17:39.144892 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e06c5900-d0dc-4011-934f-01926c96ebe8-goldmane-ca-bundle\") pod \"goldmane-666569f655-8qltp\" (UID: \"e06c5900-d0dc-4011-934f-01926c96ebe8\") " pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.181172 containerd[1703]: time="2025-11-24T00:17:39.181120239Z" level=error msg="Failed to destroy network for sandbox \"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.185613 containerd[1703]: time="2025-11-24T00:17:39.185576835Z" level=error msg="Failed to destroy network for sandbox 
\"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.185749 systemd[1]: run-netns-cni\x2d28342b73\x2d6832\x2d41b8\x2d6cb1\x2d4e8d1560b3d3.mount: Deactivated successfully. Nov 24 00:17:39.188745 systemd[1]: run-netns-cni\x2dda2de55e\x2dfeb9\x2dc8fe\x2dce54\x2d117e34c85c85.mount: Deactivated successfully. Nov 24 00:17:39.190538 containerd[1703]: time="2025-11-24T00:17:39.190497637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6q9k7,Uid:9df6de2d-6161-464f-b632-ed239b828bf6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.190796 kubelet[3164]: E1124 00:17:39.190764 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.190970 kubelet[3164]: E1124 00:17:39.190947 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6q9k7" Nov 24 00:17:39.191022 kubelet[3164]: E1124 00:17:39.190981 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6q9k7" Nov 24 00:17:39.191300 kubelet[3164]: E1124 00:17:39.191058 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6q9k7_kube-system(9df6de2d-6161-464f-b632-ed239b828bf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6q9k7_kube-system(9df6de2d-6161-464f-b632-ed239b828bf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71af9f3dd02f5b3c1d819fe22d4cbbabe121bcf3771ca22e50b9cd6615573eef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6q9k7" podUID="9df6de2d-6161-464f-b632-ed239b828bf6" Nov 24 00:17:39.193517 containerd[1703]: time="2025-11-24T00:17:39.193481220Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7bc446466-4v2dc,Uid:2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.193772 kubelet[3164]: E1124 00:17:39.193745 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.193844 kubelet[3164]: E1124 00:17:39.193793 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc446466-4v2dc" Nov 24 00:17:39.193844 kubelet[3164]: E1124 00:17:39.193814 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc446466-4v2dc" Nov 24 00:17:39.193931 kubelet[3164]: E1124 00:17:39.193867 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bc446466-4v2dc_calico-system(2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bc446466-4v2dc_calico-system(2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c811e690587cc96e38197d63876bb9e4a602c99d1942e2ae88170d1990aa2bc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc446466-4v2dc" podUID="2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3" Nov 24 00:17:39.284539 containerd[1703]: time="2025-11-24T00:17:39.284239163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:17:39.357712 containerd[1703]: time="2025-11-24T00:17:39.357667867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-cvhld,Uid:b4e92bfb-9155-4b18-ad04-f06b341ea73b,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:39.374542 containerd[1703]: time="2025-11-24T00:17:39.374230898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-pdbfs,Uid:17c95f35-9a12-4372-90d3-ee8b8cc1e636,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:39.378295 containerd[1703]: time="2025-11-24T00:17:39.378263202Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-7htth,Uid:c88ea087-41f9-4607-b474-e4073ad22f81,Namespace:kube-system,Attempt:0,}" Nov 24 00:17:39.389441 containerd[1703]: time="2025-11-24T00:17:39.389263613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8qltp,Uid:e06c5900-d0dc-4011-934f-01926c96ebe8,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:39.404703 containerd[1703]: time="2025-11-24T00:17:39.404636755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8987d79-wj8qt,Uid:ea124eb0-3624-454a-aec9-841dde50238f,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:39.409836 containerd[1703]: time="2025-11-24T00:17:39.409088340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78fddc585d-2dpds,Uid:a5868a48-f0a4-49b1-9a5f-48199ea4ea4e,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:39.459369 containerd[1703]: time="2025-11-24T00:17:39.459316702Z" level=error msg="Failed to destroy network for sandbox \"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.465868 containerd[1703]: time="2025-11-24T00:17:39.465815218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-cvhld,Uid:b4e92bfb-9155-4b18-ad04-f06b341ea73b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.466221 kubelet[3164]: E1124 00:17:39.466190 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.466912 kubelet[3164]: E1124 00:17:39.466541 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" Nov 24 00:17:39.466912 kubelet[3164]: E1124 00:17:39.466586 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" Nov 24 00:17:39.466912 kubelet[3164]: E1124 00:17:39.466677 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38896a69c81d7be28269e7a40c63d33e1be3333c5babcb3f6796c151efdffbc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:17:39.528144 containerd[1703]: time="2025-11-24T00:17:39.528091611Z" level=error msg="Failed to destroy network for sandbox \"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.532402 containerd[1703]: time="2025-11-24T00:17:39.532326978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-pdbfs,Uid:17c95f35-9a12-4372-90d3-ee8b8cc1e636,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.533814 kubelet[3164]: E1124 00:17:39.533394 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.533814 kubelet[3164]: E1124 00:17:39.533644 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" Nov 24 00:17:39.533814 kubelet[3164]: E1124 00:17:39.533687 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" Nov 24 00:17:39.534077 kubelet[3164]: E1124 00:17:39.533764 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"24bd409bf5081bcb1c709d36a6f283e52a565bad75e0eaca2cf40ce556a873ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:17:39.541773 containerd[1703]: time="2025-11-24T00:17:39.541674849Z" level=error msg="Failed to destroy network for sandbox \"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.546216 containerd[1703]: time="2025-11-24T00:17:39.546173470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8qltp,Uid:e06c5900-d0dc-4011-934f-01926c96ebe8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.546722 kubelet[3164]: E1124 00:17:39.546648 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.547020 kubelet[3164]: E1124 00:17:39.546703 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.547020 kubelet[3164]: E1124 00:17:39.546963 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8qltp" Nov 24 00:17:39.547774 kubelet[3164]: E1124 00:17:39.547419 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db935c249876aaf65f0043e4f86de48b2626baefeba932fc602c446eba12a13b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:17:39.553365 containerd[1703]: time="2025-11-24T00:17:39.553329821Z" level=error msg="Failed to destroy network for sandbox \"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.557821 containerd[1703]: time="2025-11-24T00:17:39.557784881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7htth,Uid:c88ea087-41f9-4607-b474-e4073ad22f81,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.558329 kubelet[3164]: E1124 00:17:39.558127 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.558329 kubelet[3164]: E1124 00:17:39.558276 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7htth" Nov 24 00:17:39.558329 kubelet[3164]: E1124 00:17:39.558296 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7htth" Nov 24 00:17:39.558706 kubelet[3164]: E1124 00:17:39.558477 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7htth_kube-system(c88ea087-41f9-4607-b474-e4073ad22f81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7htth_kube-system(c88ea087-41f9-4607-b474-e4073ad22f81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41ea48bbc4497dbbc421c45dea67100a1904ada371d76260812362e2ed4e93c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7htth" podUID="c88ea087-41f9-4607-b474-e4073ad22f81" Nov 24 00:17:39.575152 containerd[1703]: time="2025-11-24T00:17:39.575068936Z" level=error msg="Failed to destroy network for sandbox \"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.578600 containerd[1703]: time="2025-11-24T00:17:39.578550377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8987d79-wj8qt,Uid:ea124eb0-3624-454a-aec9-841dde50238f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.579063 kubelet[3164]: E1124 00:17:39.578960 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.579063 kubelet[3164]: E1124 00:17:39.579010 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" Nov 24 00:17:39.579063 kubelet[3164]: E1124 00:17:39.579031 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" Nov 24 00:17:39.579259 kubelet[3164]: E1124 00:17:39.579227 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2934d41176f2f36d53ce48e85478dff543be95cdb5b9d6f44d8c67c32badce8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:17:39.581410 containerd[1703]: time="2025-11-24T00:17:39.581380733Z" level=error msg="Failed to destroy network for sandbox \"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.584701 containerd[1703]: time="2025-11-24T00:17:39.584610341Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78fddc585d-2dpds,Uid:a5868a48-f0a4-49b1-9a5f-48199ea4ea4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.585075 kubelet[3164]: E1124 00:17:39.584892 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:39.585075 kubelet[3164]: E1124 00:17:39.584968 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" Nov 24 00:17:39.585075 kubelet[3164]: E1124 00:17:39.584988 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" Nov 24 00:17:39.585177 kubelet[3164]: E1124 00:17:39.585040 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342147655b86ade785c4c3180a4b0e0d590e7a92be3a63e8ac809d9a0e08c7eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:17:40.128301 systemd[1]: Created slice kubepods-besteffort-pod84288287_c520_476c_9981_2956ccc0c1dc.slice - libcontainer container kubepods-besteffort-pod84288287_c520_476c_9981_2956ccc0c1dc.slice. 
Nov 24 00:17:40.130577 containerd[1703]: time="2025-11-24T00:17:40.130534330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtqbh,Uid:84288287-c520-476c-9981-2956ccc0c1dc,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:40.200914 containerd[1703]: time="2025-11-24T00:17:40.200847212Z" level=error msg="Failed to destroy network for sandbox \"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:40.203579 systemd[1]: run-netns-cni\x2d066e43e6\x2df7ad\x2dd7e0\x2df7be\x2d39c3b3c9f23c.mount: Deactivated successfully. Nov 24 00:17:40.206495 containerd[1703]: time="2025-11-24T00:17:40.206436474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtqbh,Uid:84288287-c520-476c-9981-2956ccc0c1dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:40.206746 kubelet[3164]: E1124 00:17:40.206694 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:17:40.207066 kubelet[3164]: E1124 00:17:40.206769 3164 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:40.207066 kubelet[3164]: E1124 00:17:40.206793 3164 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jtqbh" Nov 24 00:17:40.207066 kubelet[3164]: E1124 00:17:40.206844 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc2c3e4f97584c367219c60bd49cb77196644c897c1fd0a0805ddeff601bcd41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" 
Nov 24 00:17:43.486781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649755884.mount: Deactivated successfully. Nov 24 00:17:43.518298 containerd[1703]: time="2025-11-24T00:17:43.517255456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:43.520039 containerd[1703]: time="2025-11-24T00:17:43.519986371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:17:43.523676 containerd[1703]: time="2025-11-24T00:17:43.523652207Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:43.527753 containerd[1703]: time="2025-11-24T00:17:43.527717484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:17:43.528043 containerd[1703]: time="2025-11-24T00:17:43.528017603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.24373395s" Nov 24 00:17:43.528084 containerd[1703]: time="2025-11-24T00:17:43.528054977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:17:43.551293 containerd[1703]: time="2025-11-24T00:17:43.551257893Z" level=info msg="CreateContainer within sandbox \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:17:43.576648 containerd[1703]: time="2025-11-24T00:17:43.575223921Z" level=info msg="Container 1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:43.599075 containerd[1703]: time="2025-11-24T00:17:43.599039984Z" level=info msg="CreateContainer within sandbox \"dc051fa65b65e7e14e99904e4ca941cb093b14fac32c7d2ef7d7d3a9943d6b6d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00\"" Nov 24 00:17:43.599583 containerd[1703]: time="2025-11-24T00:17:43.599558987Z" level=info msg="StartContainer for \"1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00\"" Nov 24 00:17:43.601034 containerd[1703]: time="2025-11-24T00:17:43.601000608Z" level=info msg="connecting to shim 1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00" address="unix:///run/containerd/s/b6b3c633a666f109cd7f0d960c7d4ac9a741ca831aa3c8fba7daa5a3dd173d3b" protocol=ttrpc version=3 Nov 24 00:17:43.622061 systemd[1]: Started cri-containerd-1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00.scope - libcontainer container 1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00. Nov 24 00:17:43.690943 containerd[1703]: time="2025-11-24T00:17:43.690852404Z" level=info msg="StartContainer for \"1c8e0f5982ef473dd657d8ef03e315fc61fd8e2ec4abf1043d7cc22899f45f00\" returns successfully" Nov 24 00:17:43.947871 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 24 00:17:43.948018 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 00:17:44.389135 kubelet[3164]: I1124 00:17:44.389091 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plngz\" (UniqueName: \"kubernetes.io/projected/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-kube-api-access-plngz\") pod \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " Nov 24 00:17:44.389135 kubelet[3164]: I1124 00:17:44.389148 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-ca-bundle\") pod \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " Nov 24 00:17:44.389602 kubelet[3164]: I1124 00:17:44.389186 3164 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-backend-key-pair\") pod \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\" (UID: \"2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3\") " Nov 24 00:17:44.398039 kubelet[3164]: I1124 00:17:44.397964 3164 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3" (UID: "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:17:44.398349 kubelet[3164]: I1124 00:17:44.398252 3164 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3" (UID: "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:17:44.400073 kubelet[3164]: I1124 00:17:44.400037 3164 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-kube-api-access-plngz" (OuterVolumeSpecName: "kube-api-access-plngz") pod "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3" (UID: "2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3"). InnerVolumeSpecName "kube-api-access-plngz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:17:44.488410 systemd[1]: var-lib-kubelet-pods-2f2c6ff3\x2d1189\x2d42e9\x2d9af0\x2d0ec13ea4acf3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplngz.mount: Deactivated successfully. Nov 24 00:17:44.489231 systemd[1]: var-lib-kubelet-pods-2f2c6ff3\x2d1189\x2d42e9\x2d9af0\x2d0ec13ea4acf3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 24 00:17:44.490103 kubelet[3164]: I1124 00:17:44.489543 3164 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-ca-bundle\") on node \"ci-4459.2.1-a-980c694365\" DevicePath \"\"" Nov 24 00:17:44.490103 kubelet[3164]: I1124 00:17:44.489575 3164 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-whisker-backend-key-pair\") on node \"ci-4459.2.1-a-980c694365\" DevicePath \"\"" Nov 24 00:17:44.490103 kubelet[3164]: I1124 00:17:44.489585 3164 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plngz\" (UniqueName: \"kubernetes.io/projected/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3-kube-api-access-plngz\") on node \"ci-4459.2.1-a-980c694365\" DevicePath \"\"" Nov 24 00:17:44.599848 systemd[1]: Removed slice kubepods-besteffort-pod2f2c6ff3_1189_42e9_9af0_0ec13ea4acf3.slice - libcontainer container kubepods-besteffort-pod2f2c6ff3_1189_42e9_9af0_0ec13ea4acf3.slice. Nov 24 00:17:44.617029 kubelet[3164]: I1124 00:17:44.616969 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qsgps" podStartSLOduration=2.575565139 podStartE2EDuration="17.616952593s" podCreationTimestamp="2025-11-24 00:17:27 +0000 UTC" firstStartedPulling="2025-11-24 00:17:28.48730737 +0000 UTC m=+23.460328781" lastFinishedPulling="2025-11-24 00:17:43.528694838 +0000 UTC m=+38.501716235" observedRunningTime="2025-11-24 00:17:44.422823951 +0000 UTC m=+39.395845382" watchObservedRunningTime="2025-11-24 00:17:44.616952593 +0000 UTC m=+39.589974004" Nov 24 00:17:44.850125 systemd[1]: Created slice kubepods-besteffort-pod0938abdc_cc2b_4018_9eeb_6e2be7bfa61a.slice - libcontainer container kubepods-besteffort-pod0938abdc_cc2b_4018_9eeb_6e2be7bfa61a.slice. 
Nov 24 00:17:44.892145 kubelet[3164]: I1124 00:17:44.892079 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0938abdc-cc2b-4018-9eeb-6e2be7bfa61a-whisker-backend-key-pair\") pod \"whisker-dbb9bbbc6-vdbzq\" (UID: \"0938abdc-cc2b-4018-9eeb-6e2be7bfa61a\") " pod="calico-system/whisker-dbb9bbbc6-vdbzq" Nov 24 00:17:44.892145 kubelet[3164]: I1124 00:17:44.892133 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wz6\" (UniqueName: \"kubernetes.io/projected/0938abdc-cc2b-4018-9eeb-6e2be7bfa61a-kube-api-access-56wz6\") pod \"whisker-dbb9bbbc6-vdbzq\" (UID: \"0938abdc-cc2b-4018-9eeb-6e2be7bfa61a\") " pod="calico-system/whisker-dbb9bbbc6-vdbzq" Nov 24 00:17:44.892145 kubelet[3164]: I1124 00:17:44.892156 3164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0938abdc-cc2b-4018-9eeb-6e2be7bfa61a-whisker-ca-bundle\") pod \"whisker-dbb9bbbc6-vdbzq\" (UID: \"0938abdc-cc2b-4018-9eeb-6e2be7bfa61a\") " pod="calico-system/whisker-dbb9bbbc6-vdbzq" Nov 24 00:17:45.113250 kubelet[3164]: I1124 00:17:45.113150 3164 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3" path="/var/lib/kubelet/pods/2f2c6ff3-1189-42e9-9af0-0ec13ea4acf3/volumes" Nov 24 00:17:45.153645 containerd[1703]: time="2025-11-24T00:17:45.153597935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dbb9bbbc6-vdbzq,Uid:0938abdc-cc2b-4018-9eeb-6e2be7bfa61a,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:45.285879 systemd-networkd[1336]: cali2368df93f9c: Link UP Nov 24 00:17:45.288039 systemd-networkd[1336]: cali2368df93f9c: Gained carrier Nov 24 00:17:45.308519 containerd[1703]: 2025-11-24 00:17:45.186 [INFO][4335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:17:45.308519 containerd[1703]: 2025-11-24 00:17:45.195 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0 whisker-dbb9bbbc6- calico-system 0938abdc-cc2b-4018-9eeb-6e2be7bfa61a 913 0 2025-11-24 00:17:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dbb9bbbc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 whisker-dbb9bbbc6-vdbzq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2368df93f9c [] [] }} ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-" Nov 24 00:17:45.308519 containerd[1703]: 2025-11-24 00:17:45.195 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.308519 containerd[1703]: 2025-11-24 00:17:45.216 [INFO][4346] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" 
HandleID="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Workload="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.216 [INFO][4346] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" HandleID="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Workload="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-980c694365", "pod":"whisker-dbb9bbbc6-vdbzq", "timestamp":"2025-11-24 00:17:45.216548331 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.216 [INFO][4346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.216 [INFO][4346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.216 [INFO][4346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.221 [INFO][4346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.224 [INFO][4346] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.227 [INFO][4346] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.228 [INFO][4346] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.308774 containerd[1703]: 2025-11-24 00:17:45.229 [INFO][4346] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.229 [INFO][4346] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.231 [INFO][4346] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037 Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.236 [INFO][4346] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.247 [INFO][4346] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.65/26] block=192.168.69.64/26 handle="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.247 
[INFO][4346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.65/26] handle="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.247 [INFO][4346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:17:45.309048 containerd[1703]: 2025-11-24 00:17:45.247 [INFO][4346] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.65/26] IPv6=[] ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" HandleID="k8s-pod-network.f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Workload="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.309237 containerd[1703]: 2025-11-24 00:17:45.250 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0", GenerateName:"whisker-dbb9bbbc6-", Namespace:"calico-system", SelfLink:"", UID:"0938abdc-cc2b-4018-9eeb-6e2be7bfa61a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dbb9bbbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"whisker-dbb9bbbc6-vdbzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2368df93f9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:45.309237 containerd[1703]: 2025-11-24 00:17:45.250 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.65/32] ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.309326 containerd[1703]: 2025-11-24 00:17:45.250 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2368df93f9c ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.309326 containerd[1703]: 2025-11-24 00:17:45.288 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" 
WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.309373 containerd[1703]: 2025-11-24 00:17:45.289 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0", GenerateName:"whisker-dbb9bbbc6-", Namespace:"calico-system", SelfLink:"", UID:"0938abdc-cc2b-4018-9eeb-6e2be7bfa61a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dbb9bbbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037", Pod:"whisker-dbb9bbbc6-vdbzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2368df93f9c", MAC:"a2:af:69:69:9e:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:45.309439 containerd[1703]: 2025-11-24 00:17:45.305 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" Namespace="calico-system" Pod="whisker-dbb9bbbc6-vdbzq" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-whisker--dbb9bbbc6--vdbzq-eth0" Nov 24 00:17:45.355124 containerd[1703]: time="2025-11-24T00:17:45.354985249Z" level=info msg="connecting to shim f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037" address="unix:///run/containerd/s/57c8c474bb1f65e62e8af2833eea7f0954db50e9f51647f7c6e12c6dcd1f8501" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:45.379246 systemd[1]: Started cri-containerd-f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037.scope - libcontainer container f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037. 
Nov 24 00:17:45.422276 containerd[1703]: time="2025-11-24T00:17:45.422239207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dbb9bbbc6-vdbzq,Uid:0938abdc-cc2b-4018-9eeb-6e2be7bfa61a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5407c53cda207d6426b6b47d81a74f5708d9af4f2862db1786602cc6ee4b037\"" Nov 24 00:17:45.423665 containerd[1703]: time="2025-11-24T00:17:45.423635993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:17:45.686518 containerd[1703]: time="2025-11-24T00:17:45.686383079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:45.689602 containerd[1703]: time="2025-11-24T00:17:45.689484767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:17:45.689602 containerd[1703]: time="2025-11-24T00:17:45.689507161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:17:45.689771 kubelet[3164]: E1124 00:17:45.689735 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:17:45.690090 kubelet[3164]: E1124 00:17:45.689790 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:17:45.690118 kubelet[3164]: E1124 00:17:45.689977 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd95a4842830464d80326ed16998cd18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:45.692362 containerd[1703]: time="2025-11-24T00:17:45.692331594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:17:45.972095 containerd[1703]: time="2025-11-24T00:17:45.972030970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:45.975203 containerd[1703]: time="2025-11-24T00:17:45.975152167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:17:45.975380 containerd[1703]: time="2025-11-24T00:17:45.975188445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:17:45.975562 kubelet[3164]: E1124 00:17:45.975527 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:17:45.975612 kubelet[3164]: E1124 00:17:45.975576 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:17:45.975957 kubelet[3164]: E1124 00:17:45.975737 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:45.977006 kubelet[3164]: E1124 00:17:45.976956 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:17:46.307542 kubelet[3164]: E1124 00:17:46.307395 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:17:46.917057 systemd-networkd[1336]: cali2368df93f9c: Gained IPv6LL Nov 24 00:17:47.306761 kubelet[3164]: E1124 00:17:47.306713 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:17:47.339615 kubelet[3164]: I1124 00:17:47.339502 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:17:48.034845 systemd-networkd[1336]: vxlan.calico: Link UP Nov 24 00:17:48.037144 systemd-networkd[1336]: vxlan.calico: Gained carrier Nov 24 00:17:48.756576 kubelet[3164]: I1124 00:17:48.756531 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:17:49.477115 systemd-networkd[1336]: vxlan.calico: Gained IPv6LL Nov 24 00:17:50.111064 containerd[1703]: time="2025-11-24T00:17:50.110883559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8987d79-wj8qt,Uid:ea124eb0-3624-454a-aec9-841dde50238f,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:50.212434 systemd-networkd[1336]: calia70cb877e14: Link UP Nov 24 00:17:50.213383 systemd-networkd[1336]: calia70cb877e14: Gained carrier Nov 24 00:17:50.232824 containerd[1703]: 2025-11-24 00:17:50.152 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0 calico-kube-controllers-55c8987d79- calico-system ea124eb0-3624-454a-aec9-841dde50238f 854 0 2025-11-24 00:17:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c8987d79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 calico-kube-controllers-55c8987d79-wj8qt eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calia70cb877e14 [] [] }} ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-" Nov 24 00:17:50.232824 containerd[1703]: 2025-11-24 00:17:50.152 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.232824 containerd[1703]: 2025-11-24 00:17:50.175 [INFO][4716] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" HandleID="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Workload="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.176 [INFO][4716] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" HandleID="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Workload="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-980c694365", "pod":"calico-kube-controllers-55c8987d79-wj8qt", "timestamp":"2025-11-24 00:17:50.175956118 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.176 [INFO][4716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.176 [INFO][4716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.176 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.181 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.185 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.188 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.190 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234400 containerd[1703]: 2025-11-24 00:17:50.191 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.191 [INFO][4716] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.192 [INFO][4716] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7 Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.197 [INFO][4716] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.207 [INFO][4716] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.66/26] block=192.168.69.64/26 handle="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.207 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.66/26] handle="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.207 [INFO][4716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:50.234832 containerd[1703]: 2025-11-24 00:17:50.207 [INFO][4716] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.66/26] IPv6=[] ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" HandleID="k8s-pod-network.406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Workload="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.235180 containerd[1703]: 2025-11-24 00:17:50.209 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0", GenerateName:"calico-kube-controllers-55c8987d79-", Namespace:"calico-system", SelfLink:"", UID:"ea124eb0-3624-454a-aec9-841dde50238f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8987d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"calico-kube-controllers-55c8987d79-wj8qt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia70cb877e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:50.235273 containerd[1703]: 2025-11-24 00:17:50.209 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.66/32] ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.235273 containerd[1703]: 2025-11-24 00:17:50.209 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia70cb877e14 ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.235273 containerd[1703]: 2025-11-24 00:17:50.213 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" 
WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.235364 containerd[1703]: 2025-11-24 00:17:50.214 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0", GenerateName:"calico-kube-controllers-55c8987d79-", Namespace:"calico-system", SelfLink:"", UID:"ea124eb0-3624-454a-aec9-841dde50238f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8987d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7", Pod:"calico-kube-controllers-55c8987d79-wj8qt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia70cb877e14", MAC:"d6:44:08:79:ba:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:50.235442 containerd[1703]: 2025-11-24 00:17:50.226 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" Namespace="calico-system" Pod="calico-kube-controllers-55c8987d79-wj8qt" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--kube--controllers--55c8987d79--wj8qt-eth0" Nov 24 00:17:50.285650 containerd[1703]: time="2025-11-24T00:17:50.285606945Z" level=info msg="connecting to shim 406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7" address="unix:///run/containerd/s/07f9240676a808d612c827700f0a2a24b0fa11dd17010412cb56ce924071f10b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:50.312301 systemd[1]: Started cri-containerd-406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7.scope - libcontainer container 406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7. 
Nov 24 00:17:50.364293 containerd[1703]: time="2025-11-24T00:17:50.364178488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8987d79-wj8qt,Uid:ea124eb0-3624-454a-aec9-841dde50238f,Namespace:calico-system,Attempt:0,} returns sandbox id \"406871513e5a94c3d0a9546976bb5606f84435cc6e152b3c7d55354406dc28d7\"" Nov 24 00:17:50.366246 containerd[1703]: time="2025-11-24T00:17:50.366210919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:17:50.627291 containerd[1703]: time="2025-11-24T00:17:50.627160075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:50.630670 containerd[1703]: time="2025-11-24T00:17:50.630629837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:17:50.630780 containerd[1703]: time="2025-11-24T00:17:50.630723880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:17:50.630991 kubelet[3164]: E1124 00:17:50.630953 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:17:50.631314 kubelet[3164]: E1124 00:17:50.631005 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:17:50.631314 kubelet[3164]: E1124 00:17:50.631170 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvlqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:50.633106 kubelet[3164]: E1124 00:17:50.633054 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:17:51.112163 containerd[1703]: time="2025-11-24T00:17:51.112087045Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-pdbfs,Uid:17c95f35-9a12-4372-90d3-ee8b8cc1e636,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:51.209664 systemd-networkd[1336]: cali4b1e230592b: Link UP Nov 24 00:17:51.210583 systemd-networkd[1336]: cali4b1e230592b: Gained carrier Nov 24 00:17:51.226821 containerd[1703]: 2025-11-24 00:17:51.153 [INFO][4777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0 calico-apiserver-869ddb6fcd- calico-apiserver 17c95f35-9a12-4372-90d3-ee8b8cc1e636 852 0 2025-11-24 00:17:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:869ddb6fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 calico-apiserver-869ddb6fcd-pdbfs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b1e230592b [] [] }} ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-" Nov 24 00:17:51.226821 containerd[1703]: 2025-11-24 00:17:51.153 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.226821 containerd[1703]: 2025-11-24 00:17:51.175 [INFO][4788] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" HandleID="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.175 [INFO][4788] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" HandleID="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-980c694365", "pod":"calico-apiserver-869ddb6fcd-pdbfs", "timestamp":"2025-11-24 00:17:51.175003737 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.175 [INFO][4788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.175 [INFO][4788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.175 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.180 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.183 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.186 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.187 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227149 containerd[1703]: 2025-11-24 00:17:51.189 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.189 [INFO][4788] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.190 [INFO][4788] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39 Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.194 [INFO][4788] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.205 [INFO][4788] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.67/26] block=192.168.69.64/26 handle="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.205 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.67/26] handle="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.205 [INFO][4788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:51.227814 containerd[1703]: 2025-11-24 00:17:51.206 [INFO][4788] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.67/26] IPv6=[] ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" HandleID="k8s-pod-network.f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.228481 containerd[1703]: 2025-11-24 00:17:51.207 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0", GenerateName:"calico-apiserver-869ddb6fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"17c95f35-9a12-4372-90d3-ee8b8cc1e636", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869ddb6fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"calico-apiserver-869ddb6fcd-pdbfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b1e230592b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:51.228994 containerd[1703]: 2025-11-24 00:17:51.207 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.67/32] ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.228994 containerd[1703]: 2025-11-24 00:17:51.207 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b1e230592b ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.228994 containerd[1703]: 2025-11-24 00:17:51.210 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.229130 containerd[1703]: 2025-11-24 00:17:51.210 
[INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0", GenerateName:"calico-apiserver-869ddb6fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"17c95f35-9a12-4372-90d3-ee8b8cc1e636", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869ddb6fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39", Pod:"calico-apiserver-869ddb6fcd-pdbfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b1e230592b", MAC:"5a:be:94:0a:94:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:51.229239 containerd[1703]: 2025-11-24 00:17:51.224 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-pdbfs" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--pdbfs-eth0" Nov 24 00:17:51.275521 containerd[1703]: time="2025-11-24T00:17:51.274999155Z" level=info msg="connecting to shim f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39" address="unix:///run/containerd/s/860a7bbab0fffda0bb7abab8b67cce72def6745eafdce86a7e03f6ed45d326a1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:51.301047 systemd[1]: Started cri-containerd-f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39.scope - libcontainer container f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39. 
Nov 24 00:17:51.319091 kubelet[3164]: E1124 00:17:51.319047 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:17:51.361118 containerd[1703]: time="2025-11-24T00:17:51.361079356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-pdbfs,Uid:17c95f35-9a12-4372-90d3-ee8b8cc1e636,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f2e46ba60a89373bb371452ad718b31f3735096d78352bb6f353ebc9efe83a39\"" Nov 24 00:17:51.363158 containerd[1703]: time="2025-11-24T00:17:51.362373630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:17:51.619154 containerd[1703]: time="2025-11-24T00:17:51.619020374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:51.622845 containerd[1703]: time="2025-11-24T00:17:51.622783542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:17:51.622997 containerd[1703]: time="2025-11-24T00:17:51.622807118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:17:51.623142 kubelet[3164]: E1124 00:17:51.623101 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:51.623189 kubelet[3164]: E1124 00:17:51.623159 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:51.623339 kubelet[3164]: E1124 00:17:51.623292 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxjkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:51.624863 kubelet[3164]: E1124 00:17:51.624793 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:17:51.653041 systemd-networkd[1336]: calia70cb877e14: Gained IPv6LL Nov 24 00:17:52.111724 containerd[1703]: time="2025-11-24T00:17:52.111664660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtqbh,Uid:84288287-c520-476c-9981-2956ccc0c1dc,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:52.112595 containerd[1703]: time="2025-11-24T00:17:52.112387527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6q9k7,Uid:9df6de2d-6161-464f-b632-ed239b828bf6,Namespace:kube-system,Attempt:0,}" Nov 24 00:17:52.241389 systemd-networkd[1336]: calid7ac7c36371: Link UP Nov 24 00:17:52.243449 systemd-networkd[1336]: calid7ac7c36371: Gained carrier Nov 24 00:17:52.293027 systemd-networkd[1336]: cali4b1e230592b: Gained 
IPv6LL Nov 24 00:17:52.322708 kubelet[3164]: E1124 00:17:52.321324 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:17:52.323412 kubelet[3164]: E1124 00:17:52.322491 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:17:52.331943 containerd[1703]: 2025-11-24 00:17:52.163 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0 csi-node-driver- calico-system 84288287-c520-476c-9981-2956ccc0c1dc 742 0 2025-11-24 00:17:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 csi-node-driver-jtqbh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid7ac7c36371 [] [] }} ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-" Nov 24 00:17:52.331943 containerd[1703]: 2025-11-24 00:17:52.163 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.331943 containerd[1703]: 2025-11-24 00:17:52.195 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" HandleID="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Workload="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.196 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" HandleID="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Workload="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-980c694365", "pod":"csi-node-driver-jtqbh", "timestamp":"2025-11-24 00:17:52.195968262 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.196 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.196 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.196 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.204 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.207 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.210 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.212 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.332390 containerd[1703]: 2025-11-24 00:17:52.213 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.213 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.214 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0 Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.218 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.68/26] block=192.168.69.64/26 handle="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.68/26] handle="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:52.334193 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.68/26] IPv6=[] ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" HandleID="k8s-pod-network.4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Workload="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.334380 containerd[1703]: 2025-11-24 00:17:52.233 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84288287-c520-476c-9981-2956ccc0c1dc", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"csi-node-driver-jtqbh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7ac7c36371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:52.334466 containerd[1703]: 2025-11-24 00:17:52.233 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.68/32] ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.334466 containerd[1703]: 2025-11-24 00:17:52.233 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7ac7c36371 ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.334466 containerd[1703]: 2025-11-24 00:17:52.243 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.334534 containerd[1703]: 2025-11-24 00:17:52.244 [INFO][4847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84288287-c520-476c-9981-2956ccc0c1dc", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0", Pod:"csi-node-driver-jtqbh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7ac7c36371", MAC:"e6:b4:72:ec:87:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:52.334609 containerd[1703]: 2025-11-24 00:17:52.325 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" Namespace="calico-system" Pod="csi-node-driver-jtqbh" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-csi--node--driver--jtqbh-eth0" Nov 24 00:17:52.392255 containerd[1703]: time="2025-11-24T00:17:52.392050479Z" level=info msg="connecting to shim 4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0" address="unix:///run/containerd/s/855dfd95b69909580bd8036f00d4b85be681c53e801c5e89f253d7a1e4b00708" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:52.418051 systemd[1]: Started cri-containerd-4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0.scope - libcontainer container 4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0. 
Nov 24 00:17:52.441308 containerd[1703]: time="2025-11-24T00:17:52.441215132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jtqbh,Uid:84288287-c520-476c-9981-2956ccc0c1dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ccba00fa34649b2b758cb27e7aa47b7224d0a0dbba3e8ca00993ecc1a43b8a0\"" Nov 24 00:17:52.444658 containerd[1703]: time="2025-11-24T00:17:52.444614943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:17:52.544675 systemd-networkd[1336]: cali83aa08f2910: Link UP Nov 24 00:17:52.546452 systemd-networkd[1336]: cali83aa08f2910: Gained carrier Nov 24 00:17:52.576059 containerd[1703]: 2025-11-24 00:17:52.170 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0 coredns-674b8bbfcf- kube-system 9df6de2d-6161-464f-b632-ed239b828bf6 847 0 2025-11-24 00:17:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 coredns-674b8bbfcf-6q9k7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83aa08f2910 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-" Nov 24 00:17:52.576059 containerd[1703]: 2025-11-24 00:17:52.170 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576059 containerd[1703]: 2025-11-24 00:17:52.204 [INFO][4876] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" HandleID="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.204 [INFO][4876] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" HandleID="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-980c694365", "pod":"coredns-674b8bbfcf-6q9k7", "timestamp":"2025-11-24 00:17:52.20403405 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.204 [INFO][4876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.230 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.327 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.342 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.380 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.388 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576282 containerd[1703]: 2025-11-24 00:17:52.393 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.393 [INFO][4876] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.482 [INFO][4876] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.524 [INFO][4876] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.535 [INFO][4876] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.69/26] block=192.168.69.64/26 handle="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.535 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.69/26] handle="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.535 [INFO][4876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:52.576508 containerd[1703]: 2025-11-24 00:17:52.535 [INFO][4876] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.69/26] IPv6=[] ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" HandleID="k8s-pod-network.897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.538 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9df6de2d-6161-464f-b632-ed239b828bf6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"coredns-674b8bbfcf-6q9k7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83aa08f2910", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.538 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.69/32] ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.539 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83aa08f2910 ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.546 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.548 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9df6de2d-6161-464f-b632-ed239b828bf6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d", Pod:"coredns-674b8bbfcf-6q9k7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83aa08f2910", MAC:"4a:c4:c2:21:87:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:52.576663 containerd[1703]: 2025-11-24 00:17:52.571 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6q9k7" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--6q9k7-eth0" Nov 24 00:17:52.632611 containerd[1703]: time="2025-11-24T00:17:52.632565689Z" level=info msg="connecting to shim 897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d" address="unix:///run/containerd/s/a99e9f842a310972a03fa7fc845c6db8306d868849378901bd90b57119bdf51a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:52.666214 systemd[1]: Started cri-containerd-897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d.scope - libcontainer container 897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d. 
Nov 24 00:17:52.710268 containerd[1703]: time="2025-11-24T00:17:52.710207969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:52.714919 containerd[1703]: time="2025-11-24T00:17:52.714836931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:17:52.715128 containerd[1703]: time="2025-11-24T00:17:52.714862912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:17:52.716058 kubelet[3164]: E1124 00:17:52.716011 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:17:52.716157 kubelet[3164]: E1124 00:17:52.716062 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:17:52.717226 kubelet[3164]: E1124 00:17:52.716434 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:52.719286 containerd[1703]: time="2025-11-24T00:17:52.719087849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:17:52.727018 containerd[1703]: time="2025-11-24T00:17:52.726997236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6q9k7,Uid:9df6de2d-6161-464f-b632-ed239b828bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d\"" Nov 24 00:17:52.738644 containerd[1703]: time="2025-11-24T00:17:52.738615349Z" level=info msg="CreateContainer within sandbox \"897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:17:52.760723 containerd[1703]: time="2025-11-24T00:17:52.760251692Z" level=info msg="Container 0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:52.779441 containerd[1703]: time="2025-11-24T00:17:52.779407676Z" level=info msg="CreateContainer within sandbox \"897a4792394b27a70a36a6c7d478aac82ab71e849135208d90c580205631947d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210\"" Nov 24 00:17:52.780299 containerd[1703]: time="2025-11-24T00:17:52.780255361Z" level=info msg="StartContainer for \"0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210\"" Nov 24 00:17:52.781717 containerd[1703]: time="2025-11-24T00:17:52.781687980Z" level=info msg="connecting to shim 0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210" address="unix:///run/containerd/s/a99e9f842a310972a03fa7fc845c6db8306d868849378901bd90b57119bdf51a" protocol=ttrpc version=3 Nov 24 00:17:52.803071 systemd[1]: Started cri-containerd-0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210.scope - libcontainer container 0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210. 
Nov 24 00:17:52.834542 containerd[1703]: time="2025-11-24T00:17:52.834437476Z" level=info msg="StartContainer for \"0bdb65b64ab0fb97d6d530c4f87263d71490fd87bc42aeb94cad143f5a579210\" returns successfully" Nov 24 00:17:52.998800 containerd[1703]: time="2025-11-24T00:17:52.998750543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:53.002095 containerd[1703]: time="2025-11-24T00:17:53.002060941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:17:53.002160 containerd[1703]: time="2025-11-24T00:17:53.002151941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:17:53.002806 kubelet[3164]: E1124 00:17:53.002325 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:17:53.002806 kubelet[3164]: E1124 00:17:53.002379 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:17:53.002806 kubelet[3164]: E1124 00:17:53.002509 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:53.003990 kubelet[3164]: E1124 00:17:53.003940 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:53.112648 containerd[1703]: time="2025-11-24T00:17:53.112293690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78fddc585d-2dpds,Uid:a5868a48-f0a4-49b1-9a5f-48199ea4ea4e,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:53.114393 containerd[1703]: time="2025-11-24T00:17:53.113004392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7htth,Uid:c88ea087-41f9-4607-b474-e4073ad22f81,Namespace:kube-system,Attempt:0,}" Nov 24 00:17:53.114393 containerd[1703]: 
time="2025-11-24T00:17:53.113368945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-cvhld,Uid:b4e92bfb-9155-4b18-ad04-f06b341ea73b,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:17:53.292027 systemd-networkd[1336]: cali68d87753024: Link UP Nov 24 00:17:53.293428 systemd-networkd[1336]: cali68d87753024: Gained carrier Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.194 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0 coredns-674b8bbfcf- kube-system c88ea087-41f9-4607-b474-e4073ad22f81 853 0 2025-11-24 00:17:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 coredns-674b8bbfcf-7htth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68d87753024 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.194 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.237 [INFO][5069] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" HandleID="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.237 [INFO][5069] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" HandleID="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-980c694365", "pod":"coredns-674b8bbfcf-7htth", "timestamp":"2025-11-24 00:17:53.237433284 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.237 [INFO][5069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.237 [INFO][5069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.237 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.244 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.251 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.254 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.256 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.259 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.259 [INFO][5069] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.260 [INFO][5069] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.268 [INFO][5069] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.279 [INFO][5069] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.70/26] block=192.168.69.64/26 handle="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.279 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.70/26] handle="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.279 [INFO][5069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:53.317631 containerd[1703]: 2025-11-24 00:17:53.279 [INFO][5069] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.70/26] IPv6=[] ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" HandleID="k8s-pod-network.cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Workload="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.283 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c88ea087-41f9-4607-b474-e4073ad22f81", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"coredns-674b8bbfcf-7htth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d87753024", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.285 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.70/32] ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.285 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68d87753024 ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.294 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.295 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c88ea087-41f9-4607-b474-e4073ad22f81", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb", Pod:"coredns-674b8bbfcf-7htth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d87753024", MAC:"a2:68:a2:e7:34:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.318778 containerd[1703]: 2025-11-24 00:17:53.315 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" Namespace="kube-system" Pod="coredns-674b8bbfcf-7htth" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-coredns--674b8bbfcf--7htth-eth0" Nov 24 00:17:53.331875 kubelet[3164]: E1124 00:17:53.331840 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:17:53.334076 kubelet[3164]: E1124 00:17:53.334041 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:53.369490 containerd[1703]: time="2025-11-24T00:17:53.369001443Z" level=info msg="connecting to shim cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb" address="unix:///run/containerd/s/bd0ebc097442889e52f8b1a12198372105c35b678b21ef2f48deaed1d74e4c05" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:53.391226 systemd[1]: Started cri-containerd-cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb.scope - libcontainer container cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb. Nov 24 00:17:53.436771 containerd[1703]: time="2025-11-24T00:17:53.436244312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7htth,Uid:c88ea087-41f9-4607-b474-e4073ad22f81,Namespace:kube-system,Attempt:0,} returns sandbox id \"cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb\"" Nov 24 00:17:53.445198 systemd-networkd[1336]: calid7ac7c36371: Gained IPv6LL Nov 24 00:17:53.451783 containerd[1703]: time="2025-11-24T00:17:53.451670504Z" level=info msg="CreateContainer within sandbox \"cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:17:53.476315 containerd[1703]: time="2025-11-24T00:17:53.476285710Z" level=info msg="Container 05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:17:53.486614 kubelet[3164]: I1124 00:17:53.486262 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6q9k7" podStartSLOduration=43.486243609 podStartE2EDuration="43.486243609s" podCreationTimestamp="2025-11-24 00:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:53.351180527 +0000 UTC m=+48.324201937" watchObservedRunningTime="2025-11-24 00:17:53.486243609 +0000 UTC m=+48.459265023" Nov 24 00:17:53.492071 containerd[1703]: time="2025-11-24T00:17:53.492040335Z" level=info msg="CreateContainer within sandbox \"cefcf7a98aebde5e55c9a61b0ac6959527bb9161cee6bb14a1a0c84683053bbb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701\"" Nov 24 00:17:53.492548 containerd[1703]: time="2025-11-24T00:17:53.492531213Z" level=info msg="StartContainer for \"05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701\"" Nov 24 00:17:53.493409 containerd[1703]: time="2025-11-24T00:17:53.493346866Z" level=info msg="connecting to shim 
05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701" address="unix:///run/containerd/s/bd0ebc097442889e52f8b1a12198372105c35b678b21ef2f48deaed1d74e4c05" protocol=ttrpc version=3 Nov 24 00:17:53.510056 systemd[1]: Started cri-containerd-05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701.scope - libcontainer container 05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701. Nov 24 00:17:53.546299 containerd[1703]: time="2025-11-24T00:17:53.546098154Z" level=info msg="StartContainer for \"05eefc429b553bb2316a64d6dc9ffe9c064868080f2c1f4e934b0ddb06d92701\" returns successfully" Nov 24 00:17:53.788357 systemd-networkd[1336]: calid98713663b2: Link UP Nov 24 00:17:53.788560 systemd-networkd[1336]: calid98713663b2: Gained carrier Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.201 [INFO][5052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0 calico-apiserver-869ddb6fcd- calico-apiserver b4e92bfb-9155-4b18-ad04-f06b341ea73b 850 0 2025-11-24 00:17:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:869ddb6fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 calico-apiserver-869ddb6fcd-cvhld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid98713663b2 [] [] }} ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.202 [INFO][5052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.261 [INFO][5075] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" HandleID="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.265 [INFO][5075] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" HandleID="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-980c694365", "pod":"calico-apiserver-869ddb6fcd-cvhld", "timestamp":"2025-11-24 00:17:53.261690458 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.265 
[INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.280 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.280 [INFO][5075] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.345 [INFO][5075] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.450 [INFO][5075] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.529 [INFO][5075] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.575 [INFO][5075] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.620 [INFO][5075] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.621 [INFO][5075] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.626 [INFO][5075] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.675 [INFO][5075] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.779 [INFO][5075] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.71/26] block=192.168.69.64/26 handle="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.779 [INFO][5075] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.71/26] handle="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.780 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:53.824622 containerd[1703]: 2025-11-24 00:17:53.780 [INFO][5075] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.71/26] IPv6=[] ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" HandleID="k8s-pod-network.d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.783 [INFO][5052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0", GenerateName:"calico-apiserver-869ddb6fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4e92bfb-9155-4b18-ad04-f06b341ea73b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869ddb6fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"calico-apiserver-869ddb6fcd-cvhld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98713663b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.783 [INFO][5052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.71/32] ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.783 [INFO][5052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid98713663b2 ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.787 [INFO][5052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.791 
[INFO][5052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0", GenerateName:"calico-apiserver-869ddb6fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4e92bfb-9155-4b18-ad04-f06b341ea73b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869ddb6fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b", Pod:"calico-apiserver-869ddb6fcd-cvhld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid98713663b2", MAC:"fe:e8:e9:12:84:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.826267 containerd[1703]: 2025-11-24 00:17:53.821 [INFO][5052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" Namespace="calico-apiserver" Pod="calico-apiserver-869ddb6fcd-cvhld" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--869ddb6fcd--cvhld-eth0" Nov 24 00:17:53.865520 systemd-networkd[1336]: calib43a7ede0e0: Link UP Nov 24 00:17:53.867526 systemd-networkd[1336]: calib43a7ede0e0: Gained carrier Nov 24 00:17:53.883387 containerd[1703]: time="2025-11-24T00:17:53.883335754Z" level=info msg="connecting to shim d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b" address="unix:///run/containerd/s/4bdbdc42adea0d86d8e556e370ff2c9a2ad7c3a1c2dc4e25965c75e9cb6882f6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:53.911045 systemd[1]: Started cri-containerd-d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b.scope - libcontainer container d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b. 
Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.224 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0 calico-apiserver-78fddc585d- calico-apiserver a5868a48-f0a4-49b1-9a5f-48199ea4ea4e 855 0 2025-11-24 00:17:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78fddc585d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 calico-apiserver-78fddc585d-2dpds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib43a7ede0e0 [] [] }} ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.224 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.273 [INFO][5081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" HandleID="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.273 [INFO][5081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" HandleID="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-980c694365", "pod":"calico-apiserver-78fddc585d-2dpds", "timestamp":"2025-11-24 00:17:53.273636633 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.273 [INFO][5081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.780 [INFO][5081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.780 [INFO][5081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.797 [INFO][5081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.804 [INFO][5081] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.825 [INFO][5081] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.829 [INFO][5081] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.832 [INFO][5081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.832 [INFO][5081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.833 [INFO][5081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.843 [INFO][5081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.856 [INFO][5081] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.72/26] block=192.168.69.64/26 handle="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.856 [INFO][5081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.72/26] handle="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.856 [INFO][5081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:53.927510 containerd[1703]: 2025-11-24 00:17:53.856 [INFO][5081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.72/26] IPv6=[] ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" HandleID="k8s-pod-network.82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Workload="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.860 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0", GenerateName:"calico-apiserver-78fddc585d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5868a48-f0a4-49b1-9a5f-48199ea4ea4e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78fddc585d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"calico-apiserver-78fddc585d-2dpds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib43a7ede0e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.860 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.72/32] ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.860 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib43a7ede0e0 ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.869 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.869 
[INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0", GenerateName:"calico-apiserver-78fddc585d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5868a48-f0a4-49b1-9a5f-48199ea4ea4e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78fddc585d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c", Pod:"calico-apiserver-78fddc585d-2dpds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib43a7ede0e0", MAC:"0e:83:cc:d0:4e:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:53.928135 containerd[1703]: 2025-11-24 00:17:53.923 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" Namespace="calico-apiserver" Pod="calico-apiserver-78fddc585d-2dpds" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-calico--apiserver--78fddc585d--2dpds-eth0" Nov 24 00:17:53.978922 containerd[1703]: time="2025-11-24T00:17:53.978776302Z" level=info msg="connecting to shim 82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c" address="unix:///run/containerd/s/3456f697f4dbeca17ca85232ca21a443f0c478c53a4b17c1696e844106714655" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:54.010104 systemd[1]: Started cri-containerd-82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c.scope - libcontainer container 82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c. 
Nov 24 00:17:54.021019 systemd-networkd[1336]: cali83aa08f2910: Gained IPv6LL Nov 24 00:17:54.028680 containerd[1703]: time="2025-11-24T00:17:54.028635802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869ddb6fcd-cvhld,Uid:b4e92bfb-9155-4b18-ad04-f06b341ea73b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d2d47adfa65d908a0e59d1bdae595bbab673350187439dce7db0a257ac50104b\"" Nov 24 00:17:54.031841 containerd[1703]: time="2025-11-24T00:17:54.031724148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:17:54.097125 containerd[1703]: time="2025-11-24T00:17:54.096339349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78fddc585d-2dpds,Uid:a5868a48-f0a4-49b1-9a5f-48199ea4ea4e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82e55aa3ce8ccdc8eecf8b1667e663a94c5ef38ec305e75214ce37d183607f7c\"" Nov 24 00:17:54.300702 containerd[1703]: time="2025-11-24T00:17:54.300659531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:54.304037 containerd[1703]: time="2025-11-24T00:17:54.304002764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:17:54.304233 containerd[1703]: time="2025-11-24T00:17:54.304007607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:17:54.304266 kubelet[3164]: E1124 00:17:54.304233 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:54.304305 kubelet[3164]: E1124 00:17:54.304288 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:54.304697 kubelet[3164]: E1124 00:17:54.304565 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4gpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:54.304955 containerd[1703]: time="2025-11-24T00:17:54.304638508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:17:54.306249 kubelet[3164]: E1124 00:17:54.306213 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:17:54.337724 kubelet[3164]: E1124 00:17:54.337289 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" 
podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:17:54.340342 kubelet[3164]: E1124 00:17:54.340273 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:17:54.357116 kubelet[3164]: I1124 00:17:54.356133 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7htth" podStartSLOduration=44.356117308 podStartE2EDuration="44.356117308s" podCreationTimestamp="2025-11-24 00:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:17:54.355388266 +0000 UTC m=+49.328409679" watchObservedRunningTime="2025-11-24 00:17:54.356117308 +0000 UTC m=+49.329138722" Nov 24 00:17:54.582271 containerd[1703]: time="2025-11-24T00:17:54.582087197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:54.586069 containerd[1703]: time="2025-11-24T00:17:54.586005137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:17:54.586351 containerd[1703]: time="2025-11-24T00:17:54.586148251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:17:54.586535 kubelet[3164]: E1124 00:17:54.586497 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:54.586596 kubelet[3164]: E1124 00:17:54.586548 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:17:54.586952 kubelet[3164]: E1124 00:17:54.586697 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkd6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:54.587919 kubelet[3164]: E1124 00:17:54.587866 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:17:54.790209 systemd-networkd[1336]: cali68d87753024: Gained IPv6LL Nov 24 00:17:55.113002 containerd[1703]: time="2025-11-24T00:17:55.112817590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8qltp,Uid:e06c5900-d0dc-4011-934f-01926c96ebe8,Namespace:calico-system,Attempt:0,}" Nov 24 00:17:55.219010 systemd-networkd[1336]: cali2fbd8587e3a: Link UP Nov 24 00:17:55.219201 systemd-networkd[1336]: cali2fbd8587e3a: Gained carrier Nov 24 00:17:55.237132 systemd-networkd[1336]: calid98713663b2: Gained IPv6LL Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.149 [INFO][5301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0 goldmane-666569f655- calico-system e06c5900-d0dc-4011-934f-01926c96ebe8 851 0 2025-11-24 00:17:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-a-980c694365 goldmane-666569f655-8qltp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2fbd8587e3a [] [] }} ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.149 [INFO][5301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.172 [INFO][5314] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" HandleID="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Workload="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.172 [INFO][5314] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" HandleID="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Workload="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-980c694365", "pod":"goldmane-666569f655-8qltp", "timestamp":"2025-11-24 00:17:55.172278845 +0000 UTC"}, Hostname:"ci-4459.2.1-a-980c694365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.172 [INFO][5314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.172 [INFO][5314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.172 [INFO][5314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-980c694365' Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.181 [INFO][5314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.187 [INFO][5314] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.191 [INFO][5314] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.192 [INFO][5314] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.194 [INFO][5314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.194 [INFO][5314] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.196 [INFO][5314] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864 Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.200 [INFO][5314] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.214 [INFO][5314] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.73/26] block=192.168.69.64/26 handle="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.214 [INFO][5314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.73/26] handle="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" host="ci-4459.2.1-a-980c694365" Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.214 [INFO][5314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:17:55.240361 containerd[1703]: 2025-11-24 00:17:55.214 [INFO][5314] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.73/26] IPv6=[] ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" HandleID="k8s-pod-network.cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Workload="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.216 [INFO][5301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e06c5900-d0dc-4011-934f-01926c96ebe8", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"", Pod:"goldmane-666569f655-8qltp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2fbd8587e3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.216 [INFO][5301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.73/32] ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.216 [INFO][5301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fbd8587e3a ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.219 [INFO][5301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.220 [INFO][5301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" 
Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e06c5900-d0dc-4011-934f-01926c96ebe8", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-980c694365", ContainerID:"cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864", Pod:"goldmane-666569f655-8qltp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2fbd8587e3a", MAC:"0e:a0:7f:b4:64:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:17:55.241119 containerd[1703]: 2025-11-24 00:17:55.235 [INFO][5301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" Namespace="calico-system" Pod="goldmane-666569f655-8qltp" WorkloadEndpoint="ci--4459.2.1--a--980c694365-k8s-goldmane--666569f655--8qltp-eth0" Nov 24 00:17:55.301087 systemd-networkd[1336]: calib43a7ede0e0: Gained IPv6LL Nov 24 00:17:55.303309 containerd[1703]: time="2025-11-24T00:17:55.303264271Z" level=info msg="connecting to shim cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864" address="unix:///run/containerd/s/f1114f675caee6f9eca427d9dab8c2d6a565b0a1d6abf839f35126de150798d4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:17:55.333065 systemd[1]: Started cri-containerd-cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864.scope - libcontainer container cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864. 
Nov 24 00:17:55.341835 kubelet[3164]: E1124 00:17:55.341785 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:17:55.343211 kubelet[3164]: E1124 00:17:55.342887 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:17:55.391865 containerd[1703]: time="2025-11-24T00:17:55.391748473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8qltp,Uid:e06c5900-d0dc-4011-934f-01926c96ebe8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb4bb7eba6b2d930a1ab6a68ae4fc4bd2871513a50e4546f53e0cfd720c75864\"" Nov 24 00:17:55.396062 containerd[1703]: time="2025-11-24T00:17:55.396031852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:17:55.661864 containerd[1703]: time="2025-11-24T00:17:55.661681998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:17:55.665400 containerd[1703]: time="2025-11-24T00:17:55.665181665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:17:55.665501 containerd[1703]: time="2025-11-24T00:17:55.665452229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:17:55.665743 kubelet[3164]: E1124 00:17:55.665704 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:17:55.665809 kubelet[3164]: E1124 00:17:55.665757 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:17:55.666012 kubelet[3164]: E1124 00:17:55.665940 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:17:55.667809 kubelet[3164]: E1124 00:17:55.667734 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:17:56.344835 kubelet[3164]: E1124 
00:17:56.344693 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:17:57.093687 systemd-networkd[1336]: cali2fbd8587e3a: Gained IPv6LL Nov 24 00:17:57.346853 kubelet[3164]: E1124 00:17:57.346723 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:00.111492 containerd[1703]: time="2025-11-24T00:18:00.111448579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:18:00.392231 containerd[1703]: time="2025-11-24T00:18:00.391739762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:00.398086 containerd[1703]: time="2025-11-24T00:18:00.398037001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:18:00.398188 containerd[1703]: time="2025-11-24T00:18:00.398143255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:18:00.398571 kubelet[3164]: E1124 00:18:00.398339 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:00.398571 kubelet[3164]: E1124 00:18:00.398406 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:00.399423 kubelet[3164]: E1124 00:18:00.398984 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd95a4842830464d80326ed16998cd18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:00.401460 containerd[1703]: time="2025-11-24T00:18:00.401434137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:18:00.667482 containerd[1703]: time="2025-11-24T00:18:00.667349536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:00.671142 containerd[1703]: time="2025-11-24T00:18:00.671091490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:18:00.671269 containerd[1703]: time="2025-11-24T00:18:00.671193925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:18:00.671509 kubelet[3164]: E1124 00:18:00.671447 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:00.671560 kubelet[3164]: E1124 00:18:00.671519 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:00.671700 kubelet[3164]: E1124 00:18:00.671651 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:00.673331 kubelet[3164]: E1124 00:18:00.673253 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:18:04.112742 containerd[1703]: time="2025-11-24T00:18:04.112611890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:04.385607 containerd[1703]: time="2025-11-24T00:18:04.385469184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
00:18:04.389300 containerd[1703]: time="2025-11-24T00:18:04.389265113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:04.389362 containerd[1703]: time="2025-11-24T00:18:04.389341290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:04.389506 kubelet[3164]: E1124 00:18:04.389464 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:04.389805 kubelet[3164]: E1124 00:18:04.389516 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:04.389805 kubelet[3164]: E1124 00:18:04.389671 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxjkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:04.391925 kubelet[3164]: E1124 00:18:04.391294 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:18:05.112885 containerd[1703]: time="2025-11-24T00:18:05.112225277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:18:05.387829 containerd[1703]: time="2025-11-24T00:18:05.387695236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:05.391006 containerd[1703]: time="2025-11-24T00:18:05.390959030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:18:05.391006 containerd[1703]: time="2025-11-24T00:18:05.390988428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:18:05.391180 kubelet[3164]: E1124 00:18:05.391148 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:18:05.391473 kubelet[3164]: E1124 00:18:05.391195 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:18:05.391473 kubelet[3164]: E1124 00:18:05.391348 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvlqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:05.392947 kubelet[3164]: E1124 00:18:05.392886 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:18:07.113076 containerd[1703]: time="2025-11-24T00:18:07.112960580Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:07.378336 containerd[1703]: time="2025-11-24T00:18:07.378160698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:07.381461 containerd[1703]: time="2025-11-24T00:18:07.381372992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:07.381461 containerd[1703]: time="2025-11-24T00:18:07.381399429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:07.381639 kubelet[3164]: E1124 00:18:07.381601 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:07.381945 kubelet[3164]: E1124 00:18:07.381649 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:07.381945 kubelet[3164]: E1124 00:18:07.381797 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkd6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:07.383453 kubelet[3164]: E1124 00:18:07.383379 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:18:08.111765 containerd[1703]: time="2025-11-24T00:18:08.111507155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:08.377988 containerd[1703]: time="2025-11-24T00:18:08.377834808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:08.381208 containerd[1703]: time="2025-11-24T00:18:08.381169123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:08.381283 containerd[1703]: time="2025-11-24T00:18:08.381241770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:08.381448 kubelet[3164]: E1124 00:18:08.381413 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:08.381506 kubelet[3164]: E1124 00:18:08.381460 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:08.381651 kubelet[3164]: E1124 
00:18:08.381611 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4gpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:08.382932 kubelet[3164]: E1124 00:18:08.382878 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:18:09.113839 containerd[1703]: time="2025-11-24T00:18:09.113796256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:18:09.377572 containerd[1703]: time="2025-11-24T00:18:09.376863219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:09.381321 containerd[1703]: time="2025-11-24T00:18:09.381260420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:18:09.382008 containerd[1703]: time="2025-11-24T00:18:09.381265723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:18:09.382034 kubelet[3164]: E1124 00:18:09.381472 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:18:09.382034 kubelet[3164]: E1124 00:18:09.381521 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:18:09.382034 kubelet[3164]: E1124 00:18:09.381668 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:09.384710 containerd[1703]: time="2025-11-24T00:18:09.384683289Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:18:09.644188 containerd[1703]: time="2025-11-24T00:18:09.644049526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:09.647232 containerd[1703]: time="2025-11-24T00:18:09.647177595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:18:09.647393 containerd[1703]: time="2025-11-24T00:18:09.647197517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:18:09.647460 kubelet[3164]: E1124 00:18:09.647423 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:18:09.647517 kubelet[3164]: E1124 00:18:09.647481 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:18:09.647696 kubelet[3164]: E1124 00:18:09.647657 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:09.648857 kubelet[3164]: E1124 00:18:09.648806 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:18:12.111772 containerd[1703]: time="2025-11-24T00:18:12.111511674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:18:12.399015 containerd[1703]: time="2025-11-24T00:18:12.398867718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:12.401972 containerd[1703]: time="2025-11-24T00:18:12.401919578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:18:12.402155 containerd[1703]: time="2025-11-24T00:18:12.401934437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:12.402212 kubelet[3164]: E1124 00:18:12.402165 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:18:12.402494 kubelet[3164]: E1124 00:18:12.402221 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:18:12.402494 kubelet[3164]: E1124 00:18:12.402397 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:12.403691 kubelet[3164]: E1124 00:18:12.403628 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:14.116521 kubelet[3164]: E1124 00:18:14.115975 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:18:17.112785 kubelet[3164]: E1124 00:18:17.112139 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:18:20.113272 kubelet[3164]: E1124 00:18:20.113201 3164 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:18:22.111704 kubelet[3164]: E1124 00:18:22.111302 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:18:23.112715 kubelet[3164]: E1124 00:18:23.112363 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:18:24.112767 kubelet[3164]: E1124 00:18:24.112726 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:24.113989 kubelet[3164]: E1124 00:18:24.112866 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:18:27.113917 containerd[1703]: 
time="2025-11-24T00:18:27.113862780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:18:27.376482 containerd[1703]: time="2025-11-24T00:18:27.375971857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:27.380225 containerd[1703]: time="2025-11-24T00:18:27.380094391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:18:27.380225 containerd[1703]: time="2025-11-24T00:18:27.380197412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:18:27.380581 kubelet[3164]: E1124 00:18:27.380528 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:27.381546 kubelet[3164]: E1124 00:18:27.380972 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:18:27.381546 kubelet[3164]: E1124 00:18:27.381135 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd95a4842830464d80326ed16998cd18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" logger="UnhandledError" Nov 24 00:18:27.383867 containerd[1703]: time="2025-11-24T00:18:27.383843860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:18:27.649739 containerd[1703]: time="2025-11-24T00:18:27.649605032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:27.654113 containerd[1703]: time="2025-11-24T00:18:27.654048437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:18:27.654113 containerd[1703]: time="2025-11-24T00:18:27.654086605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:18:27.656064 kubelet[3164]: E1124 00:18:27.654292 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:27.656064 kubelet[3164]: E1124 00:18:27.654347 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:18:27.656064 kubelet[3164]: E1124 00:18:27.654483 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:27.656064 kubelet[3164]: E1124 00:18:27.655994 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:18:30.114003 containerd[1703]: time="2025-11-24T00:18:30.113233579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:18:30.404565 containerd[1703]: time="2025-11-24T00:18:30.404435908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:30.407795 containerd[1703]: time="2025-11-24T00:18:30.407747606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:18:30.407920 containerd[1703]: time="2025-11-24T00:18:30.407855294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:18:30.408096 kubelet[3164]: E1124 00:18:30.408056 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:18:30.408383 kubelet[3164]: E1124 00:18:30.408114 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:18:30.408383 kubelet[3164]: E1124 00:18:30.408275 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvlqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:30.410068 kubelet[3164]: E1124 00:18:30.410026 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:18:33.115349 containerd[1703]: time="2025-11-24T00:18:33.115203579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:33.380318 containerd[1703]: time="2025-11-24T00:18:33.380117295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:33.384311 containerd[1703]: time="2025-11-24T00:18:33.384259230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:33.384518 containerd[1703]: time="2025-11-24T00:18:33.384259100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:33.384583 kubelet[3164]: E1124 00:18:33.384538 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:33.384891 kubelet[3164]: E1124 00:18:33.384592 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:33.384891 kubelet[3164]: 
E1124 00:18:33.384750 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxjkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:33.385920 kubelet[3164]: E1124 00:18:33.385856 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:18:35.118604 containerd[1703]: time="2025-11-24T00:18:35.118362500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:18:35.397755 containerd[1703]: time="2025-11-24T00:18:35.397625860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:35.400985 containerd[1703]: time="2025-11-24T00:18:35.400919521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:18:35.401113 containerd[1703]: 
time="2025-11-24T00:18:35.400891109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:18:35.401267 kubelet[3164]: E1124 00:18:35.401204 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:18:35.401544 kubelet[3164]: E1124 00:18:35.401283 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:18:35.401544 kubelet[3164]: E1124 00:18:35.401443 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:35.402291 containerd[1703]: time="2025-11-24T00:18:35.402263246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 
00:18:35.662189 containerd[1703]: time="2025-11-24T00:18:35.661877496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:35.665386 containerd[1703]: time="2025-11-24T00:18:35.665280549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:18:35.666223 containerd[1703]: time="2025-11-24T00:18:35.665311717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:35.666633 kubelet[3164]: E1124 00:18:35.666586 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:18:35.666865 kubelet[3164]: E1124 00:18:35.666843 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:18:35.667346 containerd[1703]: time="2025-11-24T00:18:35.667321271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:35.667753 kubelet[3164]: E1124 00:18:35.667697 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:35.669734 kubelet[3164]: E1124 00:18:35.669695 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:35.941801 containerd[1703]: time="2025-11-24T00:18:35.941670000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:35.947397 containerd[1703]: time="2025-11-24T00:18:35.947090133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:35.947397 containerd[1703]: time="2025-11-24T00:18:35.947203984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:35.947562 kubelet[3164]: E1124 00:18:35.947448 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:35.947562 kubelet[3164]: E1124 00:18:35.947515 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 
00:18:35.947845 kubelet[3164]: E1124 00:18:35.947797 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkd6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:35.948615 containerd[1703]: time="2025-11-24T00:18:35.948581321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:18:35.949082 kubelet[3164]: E1124 00:18:35.949045 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:18:36.208448 containerd[1703]: time="2025-11-24T00:18:36.208307622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:36.212119 containerd[1703]: time="2025-11-24T00:18:36.212071409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:18:36.212232 containerd[1703]: time="2025-11-24T00:18:36.212180951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:18:36.212449 kubelet[3164]: E1124 00:18:36.212405 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:18:36.212508 kubelet[3164]: E1124 00:18:36.212482 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:18:36.212789 kubelet[3164]: E1124 00:18:36.212737 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:36.213142 containerd[1703]: time="2025-11-24T00:18:36.213119572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:18:36.214216 kubelet[3164]: E1124 00:18:36.214139 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:18:36.476594 containerd[1703]: time="2025-11-24T00:18:36.476539199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:18:36.479918 containerd[1703]: time="2025-11-24T00:18:36.479846073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:18:36.480073 containerd[1703]: time="2025-11-24T00:18:36.479964702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:18:36.480180 kubelet[3164]: E1124 00:18:36.480143 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:36.480469 kubelet[3164]: E1124 00:18:36.480201 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:18:36.480469 kubelet[3164]: E1124 00:18:36.480360 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4gpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:18:36.481996 kubelet[3164]: E1124 00:18:36.481879 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:18:38.114587 kubelet[3164]: E1124 00:18:38.114194 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:18:43.115646 kubelet[3164]: E1124 00:18:43.115209 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:18:47.111600 kubelet[3164]: E1124 00:18:47.111557 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:48.114664 kubelet[3164]: E1124 00:18:48.114431 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:18:49.113449 kubelet[3164]: E1124 00:18:49.113406 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:18:50.113430 kubelet[3164]: E1124 00:18:50.113361 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:18:50.115434 kubelet[3164]: E1124 00:18:50.113861 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:18:51.117382 kubelet[3164]: E1124 00:18:51.117141 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:18:52.489847 systemd[1]: Started sshd@7-10.200.4.36:22-10.200.16.10:48998.service - OpenSSH per-connection server daemon (10.200.16.10:48998). Nov 24 00:18:53.097998 sshd[5475]: Accepted publickey for core from 10.200.16.10 port 48998 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:18:53.098716 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:18:53.105065 systemd-logind[1682]: New session 10 of user core. Nov 24 00:18:53.111538 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:18:53.585114 sshd[5478]: Connection closed by 10.200.16.10 port 48998 Nov 24 00:18:53.585705 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Nov 24 00:18:53.589148 systemd[1]: sshd@7-10.200.4.36:22-10.200.16.10:48998.service: Deactivated successfully. Nov 24 00:18:53.591189 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:18:53.592732 systemd-logind[1682]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:18:53.595934 systemd-logind[1682]: Removed session 10. 
Nov 24 00:18:56.113268 kubelet[3164]: E1124 00:18:56.113219 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:18:58.703697 systemd[1]: Started sshd@8-10.200.4.36:22-10.200.16.10:49010.service - OpenSSH per-connection server daemon (10.200.16.10:49010). Nov 24 00:18:59.116521 kubelet[3164]: E1124 00:18:59.115867 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:18:59.328608 sshd[5491]: Accepted publickey for core from 10.200.16.10 port 49010 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:18:59.330097 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:18:59.335578 systemd-logind[1682]: New session 11 of user core. Nov 24 00:18:59.343074 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:18:59.830048 sshd[5494]: Connection closed by 10.200.16.10 port 49010 Nov 24 00:18:59.830403 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Nov 24 00:18:59.834045 systemd[1]: sshd@8-10.200.4.36:22-10.200.16.10:49010.service: Deactivated successfully. Nov 24 00:18:59.836317 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:18:59.837117 systemd-logind[1682]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:18:59.838494 systemd-logind[1682]: Removed session 11. 
Nov 24 00:19:00.113314 kubelet[3164]: E1124 00:19:00.113176 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:19:01.113663 kubelet[3164]: E1124 00:19:01.113611 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:19:02.113466 kubelet[3164]: E1124 00:19:02.113393 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:19:03.113909 kubelet[3164]: E1124 00:19:03.113856 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:19:04.966055 systemd[1]: Started sshd@9-10.200.4.36:22-10.200.16.10:35874.service - OpenSSH per-connection server daemon (10.200.16.10:35874). 
Nov 24 00:19:05.136053 kubelet[3164]: E1124 00:19:05.133345 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:19:05.571542 sshd[5507]: Accepted publickey for core from 10.200.16.10 port 35874 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:05.573993 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:05.582311 systemd-logind[1682]: New session 12 of user core. Nov 24 00:19:05.588466 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:19:06.121338 sshd[5512]: Connection closed by 10.200.16.10 port 35874 Nov 24 00:19:06.122528 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:06.126206 systemd[1]: sshd@9-10.200.4.36:22-10.200.16.10:35874.service: Deactivated successfully. Nov 24 00:19:06.128278 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:19:06.128983 systemd-logind[1682]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:19:06.130433 systemd-logind[1682]: Removed session 12. Nov 24 00:19:06.236267 systemd[1]: Started sshd@10-10.200.4.36:22-10.200.16.10:35880.service - OpenSSH per-connection server daemon (10.200.16.10:35880). Nov 24 00:19:06.828180 sshd[5525]: Accepted publickey for core from 10.200.16.10 port 35880 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:06.830220 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:06.839136 systemd-logind[1682]: New session 13 of user core. Nov 24 00:19:06.844098 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:19:07.114711 kubelet[3164]: E1124 00:19:07.114361 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:19:07.448133 sshd[5528]: Connection closed by 10.200.16.10 port 35880 Nov 24 00:19:07.450514 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:07.457408 systemd[1]: sshd@10-10.200.4.36:22-10.200.16.10:35880.service: Deactivated successfully. Nov 24 00:19:07.460381 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:19:07.461672 systemd-logind[1682]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:19:07.464540 systemd-logind[1682]: Removed session 13. Nov 24 00:19:07.557161 systemd[1]: Started sshd@11-10.200.4.36:22-10.200.16.10:35894.service - OpenSSH per-connection server daemon (10.200.16.10:35894). 
Nov 24 00:19:08.176205 sshd[5541]: Accepted publickey for core from 10.200.16.10 port 35894 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:08.177717 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:08.182072 systemd-logind[1682]: New session 14 of user core. Nov 24 00:19:08.185041 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:19:08.690708 sshd[5545]: Connection closed by 10.200.16.10 port 35894 Nov 24 00:19:08.691562 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:08.699239 systemd[1]: sshd@11-10.200.4.36:22-10.200.16.10:35894.service: Deactivated successfully. Nov 24 00:19:08.703556 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:19:08.705749 systemd-logind[1682]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:19:08.710394 systemd-logind[1682]: Removed session 14. Nov 24 00:19:12.112691 kubelet[3164]: E1124 00:19:12.112641 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:19:13.112405 containerd[1703]: time="2025-11-24T00:19:13.112309486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:19:13.400430 containerd[1703]: time="2025-11-24T00:19:13.400227640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:13.404918 containerd[1703]: time="2025-11-24T00:19:13.404813769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:19:13.405120 containerd[1703]: time="2025-11-24T00:19:13.405052788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:19:13.405371 kubelet[3164]: E1124 00:19:13.405327 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:13.406977 kubelet[3164]: E1124 00:19:13.405700 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:19:13.406977 kubelet[3164]: E1124 00:19:13.405840 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd95a4842830464d80326ed16998cd18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:13.408130 containerd[1703]: time="2025-11-24T00:19:13.408099143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:19:13.682689 containerd[1703]: time="2025-11-24T00:19:13.682288118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:13.686289 containerd[1703]: time="2025-11-24T00:19:13.686166780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:19:13.686289 containerd[1703]: time="2025-11-24T00:19:13.686259440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:13.686717 kubelet[3164]: E1124 00:19:13.686636 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:13.686717 kubelet[3164]: E1124 00:19:13.686700 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:19:13.687270 kubelet[3164]: E1124 00:19:13.687135 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56wz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbb9bbbc6-vdbzq_calico-system(0938abdc-cc2b-4018-9eeb-6e2be7bfa61a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:13.688486 kubelet[3164]: E1124 00:19:13.688438 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:19:13.802951 systemd[1]: Started sshd@12-10.200.4.36:22-10.200.16.10:38352.service - OpenSSH per-connection server daemon (10.200.16.10:38352). 
Nov 24 00:19:14.113091 kubelet[3164]: E1124 00:19:14.112987 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:19:14.113841 containerd[1703]: time="2025-11-24T00:19:14.113653842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:14.377148 containerd[1703]: time="2025-11-24T00:19:14.376856056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:14.380150 containerd[1703]: time="2025-11-24T00:19:14.380114192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:14.380216 containerd[1703]: time="2025-11-24T00:19:14.380200992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:14.380627 kubelet[3164]: E1124 00:19:14.380348 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:14.380627 kubelet[3164]: E1124 00:19:14.380409 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:14.380627 kubelet[3164]: E1124 00:19:14.380556 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxjkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-pdbfs_calico-apiserver(17c95f35-9a12-4372-90d3-ee8b8cc1e636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:14.381999 kubelet[3164]: E1124 00:19:14.381965 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636" Nov 24 00:19:14.406942 sshd[5565]: Accepted publickey for core from 10.200.16.10 port 38352 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:14.407518 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:14.411985 systemd-logind[1682]: New session 15 of user core. Nov 24 00:19:14.417059 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:19:14.892975 sshd[5568]: Connection closed by 10.200.16.10 port 38352 Nov 24 00:19:14.895022 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:14.899866 systemd-logind[1682]: Session 15 logged out. Waiting for processes to exit. 
Nov 24 00:19:14.900341 systemd[1]: sshd@12-10.200.4.36:22-10.200.16.10:38352.service: Deactivated successfully. Nov 24 00:19:14.903916 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:19:14.906378 systemd-logind[1682]: Removed session 15. Nov 24 00:19:15.008156 systemd[1]: Started sshd@13-10.200.4.36:22-10.200.16.10:38368.service - OpenSSH per-connection server daemon (10.200.16.10:38368). Nov 24 00:19:15.608093 sshd[5580]: Accepted publickey for core from 10.200.16.10 port 38368 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:15.610509 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:15.616439 systemd-logind[1682]: New session 16 of user core. Nov 24 00:19:15.624212 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:19:16.111489 containerd[1703]: time="2025-11-24T00:19:16.111201898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:19:16.116396 sshd[5585]: Connection closed by 10.200.16.10 port 38368 Nov 24 00:19:16.116748 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:16.120709 systemd-logind[1682]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:19:16.120995 systemd[1]: sshd@13-10.200.4.36:22-10.200.16.10:38368.service: Deactivated successfully. Nov 24 00:19:16.123576 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:19:16.127648 systemd-logind[1682]: Removed session 16. Nov 24 00:19:16.231669 systemd[1]: Started sshd@14-10.200.4.36:22-10.200.16.10:38374.service - OpenSSH per-connection server daemon (10.200.16.10:38374). Nov 24 00:19:16.381105 containerd[1703]: time="2025-11-24T00:19:16.380968781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:16.384484 containerd[1703]: time="2025-11-24T00:19:16.384447529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:19:16.384558 containerd[1703]: time="2025-11-24T00:19:16.384538393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:19:16.384744 kubelet[3164]: E1124 00:19:16.384705 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:16.385115 kubelet[3164]: E1124 00:19:16.384760 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:19:16.385115 kubelet[3164]: E1124 00:19:16.384933 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:16.387201 containerd[1703]: time="2025-11-24T00:19:16.387144267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:19:16.648280 containerd[1703]: time="2025-11-24T00:19:16.648035676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:16.651347 containerd[1703]: time="2025-11-24T00:19:16.651307147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:19:16.651465 containerd[1703]: time="2025-11-24T00:19:16.651310667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:19:16.651817 kubelet[3164]: E1124 00:19:16.651599 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:16.651817 kubelet[3164]: E1124 00:19:16.651680 3164 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:19:16.652976 kubelet[3164]: E1124 00:19:16.651844 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srpxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jtqbh_calico-system(84288287-c520-476c-9981-2956ccc0c1dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:16.653302 kubelet[3164]: E1124 00:19:16.653260 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:19:16.834912 sshd[5595]: Accepted publickey for core from 10.200.16.10 port 38374 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:16.836451 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:16.841293 systemd-logind[1682]: New session 17 of user core. Nov 24 00:19:16.848147 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:19:17.737390 sshd[5598]: Connection closed by 10.200.16.10 port 38374 Nov 24 00:19:17.738098 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:17.743088 systemd[1]: sshd@14-10.200.4.36:22-10.200.16.10:38374.service: Deactivated successfully. Nov 24 00:19:17.746118 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:19:17.747530 systemd-logind[1682]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:19:17.748993 systemd-logind[1682]: Removed session 17. Nov 24 00:19:17.843511 systemd[1]: Started sshd@15-10.200.4.36:22-10.200.16.10:38388.service - OpenSSH per-connection server daemon (10.200.16.10:38388). Nov 24 00:19:18.441208 sshd[5615]: Accepted publickey for core from 10.200.16.10 port 38388 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:18.441686 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:18.449959 systemd-logind[1682]: New session 18 of user core. Nov 24 00:19:18.454071 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:19:19.112930 sshd[5618]: Connection closed by 10.200.16.10 port 38388 Nov 24 00:19:19.116276 sshd-session[5615]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:19.118499 containerd[1703]: time="2025-11-24T00:19:19.118037760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:19:19.123182 systemd-logind[1682]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:19:19.124628 systemd[1]: sshd@15-10.200.4.36:22-10.200.16.10:38388.service: Deactivated successfully. Nov 24 00:19:19.129391 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:19:19.137502 systemd-logind[1682]: Removed session 18. Nov 24 00:19:19.223712 systemd[1]: Started sshd@16-10.200.4.36:22-10.200.16.10:38392.service - OpenSSH per-connection server daemon (10.200.16.10:38392). 
Nov 24 00:19:19.395108 containerd[1703]: time="2025-11-24T00:19:19.394757806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:19.399185 containerd[1703]: time="2025-11-24T00:19:19.399050078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:19:19.399185 containerd[1703]: time="2025-11-24T00:19:19.399090520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:19:19.399357 kubelet[3164]: E1124 00:19:19.399300 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:19.399698 kubelet[3164]: E1124 00:19:19.399360 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:19:19.399698 kubelet[3164]: E1124 00:19:19.399608 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvlqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55c8987d79-wj8qt_calico-system(ea124eb0-3624-454a-aec9-841dde50238f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:19.400355 containerd[1703]: time="2025-11-24T00:19:19.400325082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:19.401690 kubelet[3164]: E1124 00:19:19.401640 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:19:19.666349 containerd[1703]: time="2025-11-24T00:19:19.666204617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:19.671242 containerd[1703]: time="2025-11-24T00:19:19.671176649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:19.671242 containerd[1703]: time="2025-11-24T00:19:19.671215109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:19.671449 kubelet[3164]: E1124 00:19:19.671408 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:19.671500 kubelet[3164]: E1124 00:19:19.671473 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:19.671642 kubelet[3164]: 
E1124 00:19:19.671607 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4gpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869ddb6fcd-cvhld_calico-apiserver(b4e92bfb-9155-4b18-ad04-f06b341ea73b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:19.673169 kubelet[3164]: E1124 00:19:19.673104 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b" Nov 24 00:19:19.832861 sshd[5653]: Accepted publickey for core from 10.200.16.10 port 38392 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:19.834573 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:19.842181 systemd-logind[1682]: New session 19 of user core. Nov 24 00:19:19.849081 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 24 00:19:20.336932 sshd[5656]: Connection closed by 10.200.16.10 port 38392 Nov 24 00:19:20.337630 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:20.342758 systemd-logind[1682]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:19:20.343339 systemd[1]: sshd@16-10.200.4.36:22-10.200.16.10:38392.service: Deactivated successfully. Nov 24 00:19:20.347494 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:19:20.351224 systemd-logind[1682]: Removed session 19. Nov 24 00:19:24.111584 containerd[1703]: time="2025-11-24T00:19:24.111527821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:19:24.379053 containerd[1703]: time="2025-11-24T00:19:24.378751525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:24.382119 containerd[1703]: time="2025-11-24T00:19:24.382064689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:19:24.382479 containerd[1703]: time="2025-11-24T00:19:24.382217049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:24.382649 kubelet[3164]: E1124 00:19:24.382614 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:24.383067 kubelet[3164]: E1124 00:19:24.382664 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:19:24.384271 kubelet[3164]: E1124 00:19:24.384212 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8qltp_calico-system(e06c5900-d0dc-4011-934f-01926c96ebe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:24.385455 kubelet[3164]: E1124 00:19:24.385406 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:19:25.116466 containerd[1703]: 
time="2025-11-24T00:19:25.116414676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:19:25.118260 kubelet[3164]: E1124 00:19:25.118158 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:19:25.382335 containerd[1703]: time="2025-11-24T00:19:25.382148405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:19:25.385302 containerd[1703]: time="2025-11-24T00:19:25.385269141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:19:25.385368 containerd[1703]: time="2025-11-24T00:19:25.385353244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:19:25.385557 kubelet[3164]: E1124 00:19:25.385520 3164 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:25.385844 kubelet[3164]: E1124 00:19:25.385573 3164 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:19:25.385844 kubelet[3164]: E1124 00:19:25.385729 3164 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkd6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78fddc585d-2dpds_calico-apiserver(a5868a48-f0a4-49b1-9a5f-48199ea4ea4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:19:25.387338 kubelet[3164]: E1124 00:19:25.387283 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:19:25.441255 systemd[1]: Started sshd@17-10.200.4.36:22-10.200.16.10:36628.service - OpenSSH per-connection server daemon (10.200.16.10:36628). Nov 24 00:19:26.030447 sshd[5693]: Accepted publickey for core from 10.200.16.10 port 36628 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:26.031730 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:26.036393 systemd-logind[1682]: New session 20 of user core. Nov 24 00:19:26.039074 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 24 00:19:26.596885 sshd[5696]: Connection closed by 10.200.16.10 port 36628
Nov 24 00:19:26.598827 sshd-session[5693]: pam_unix(sshd:session): session closed for user core
Nov 24 00:19:26.602244 systemd-logind[1682]: Session 20 logged out. Waiting for processes to exit.
Nov 24 00:19:26.604186 systemd[1]: sshd@17-10.200.4.36:22-10.200.16.10:36628.service: Deactivated successfully.
Nov 24 00:19:26.606696 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 00:19:26.609873 systemd-logind[1682]: Removed session 20.
Nov 24 00:19:29.112029 kubelet[3164]: E1124 00:19:29.111958 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636"
Nov 24 00:19:31.114365 kubelet[3164]: E1124 00:19:31.113943 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b"
Nov 24 00:19:31.706435 systemd[1]: Started sshd@18-10.200.4.36:22-10.200.16.10:57066.service - OpenSSH per-connection server daemon (10.200.16.10:57066).
Nov 24 00:19:32.114427 kubelet[3164]: E1124 00:19:32.114364 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc"
Nov 24 00:19:32.319007 sshd[5708]: Accepted publickey for core from 10.200.16.10 port 57066 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE
Nov 24 00:19:32.320152 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:19:32.324312 systemd-logind[1682]: New session 21 of user core.
Nov 24 00:19:32.332056 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 24 00:19:32.838697 sshd[5711]: Connection closed by 10.200.16.10 port 57066 Nov 24 00:19:32.841095 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:32.845554 systemd-logind[1682]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:19:32.847287 systemd[1]: sshd@18-10.200.4.36:22-10.200.16.10:57066.service: Deactivated successfully. Nov 24 00:19:32.850265 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:19:32.854995 systemd-logind[1682]: Removed session 21. Nov 24 00:19:34.112810 kubelet[3164]: E1124 00:19:34.111771 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:19:37.113649 kubelet[3164]: E1124 00:19:37.113568 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:19:37.115413 kubelet[3164]: E1124 00:19:37.115181 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a" Nov 24 00:19:37.115413 kubelet[3164]: E1124 00:19:37.115068 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:19:37.963143 systemd[1]: 
Started sshd@19-10.200.4.36:22-10.200.16.10:57078.service - OpenSSH per-connection server daemon (10.200.16.10:57078).
Nov 24 00:19:38.562706 sshd[5723]: Accepted publickey for core from 10.200.16.10 port 57078 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE
Nov 24 00:19:38.565760 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:19:38.575119 systemd-logind[1682]: New session 22 of user core.
Nov 24 00:19:38.581297 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 24 00:19:39.052783 sshd[5726]: Connection closed by 10.200.16.10 port 57078
Nov 24 00:19:39.053360 sshd-session[5723]: pam_unix(sshd:session): session closed for user core
Nov 24 00:19:39.057995 systemd[1]: sshd@19-10.200.4.36:22-10.200.16.10:57078.service: Deactivated successfully.
Nov 24 00:19:39.060726 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 00:19:39.061814 systemd-logind[1682]: Session 22 logged out. Waiting for processes to exit.
Nov 24 00:19:39.063891 systemd-logind[1682]: Removed session 22.
Nov 24 00:19:41.115923 kubelet[3164]: E1124 00:19:41.115088 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-pdbfs" podUID="17c95f35-9a12-4372-90d3-ee8b8cc1e636"
Nov 24 00:19:44.115092 kubelet[3164]: E1124 00:19:44.115053 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869ddb6fcd-cvhld" podUID="b4e92bfb-9155-4b18-ad04-f06b341ea73b"
Nov 24 00:19:44.162150 systemd[1]: Started sshd@20-10.200.4.36:22-10.200.16.10:33864.service - OpenSSH per-connection server daemon (10.200.16.10:33864).
Nov 24 00:19:44.769468 sshd[5741]: Accepted publickey for core from 10.200.16.10 port 33864 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE
Nov 24 00:19:44.771239 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:19:44.774993 systemd-logind[1682]: New session 23 of user core.
Nov 24 00:19:44.780017 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 00:19:45.259226 sshd[5744]: Connection closed by 10.200.16.10 port 33864
Nov 24 00:19:45.259879 sshd-session[5741]: pam_unix(sshd:session): session closed for user core
Nov 24 00:19:45.263892 systemd-logind[1682]: Session 23 logged out. Waiting for processes to exit.
Nov 24 00:19:45.264253 systemd[1]: sshd@20-10.200.4.36:22-10.200.16.10:33864.service: Deactivated successfully.
Nov 24 00:19:45.266355 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 00:19:45.268323 systemd-logind[1682]: Removed session 23.
Nov 24 00:19:47.115527 kubelet[3164]: E1124 00:19:47.115002 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55c8987d79-wj8qt" podUID="ea124eb0-3624-454a-aec9-841dde50238f" Nov 24 00:19:47.116773 kubelet[3164]: E1124 00:19:47.116718 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jtqbh" podUID="84288287-c520-476c-9981-2956ccc0c1dc" Nov 24 00:19:49.115803 kubelet[3164]: E1124 00:19:49.115462 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78fddc585d-2dpds" podUID="a5868a48-f0a4-49b1-9a5f-48199ea4ea4e" Nov 24 00:19:50.369169 systemd[1]: Started sshd@21-10.200.4.36:22-10.200.16.10:37920.service - OpenSSH per-connection server daemon (10.200.16.10:37920). Nov 24 00:19:50.974216 sshd[5782]: Accepted publickey for core from 10.200.16.10 port 37920 ssh2: RSA SHA256:LOblu0hsbXu/lLhYlSIuAoW9valBssOvknMXNLUS7SE Nov 24 00:19:50.975837 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:19:50.979955 systemd-logind[1682]: New session 24 of user core. Nov 24 00:19:50.985379 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 24 00:19:51.460374 sshd[5785]: Connection closed by 10.200.16.10 port 37920 Nov 24 00:19:51.461128 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Nov 24 00:19:51.464881 systemd[1]: sshd@21-10.200.4.36:22-10.200.16.10:37920.service: Deactivated successfully. Nov 24 00:19:51.466676 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:19:51.467578 systemd-logind[1682]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:19:51.469311 systemd-logind[1682]: Removed session 24. 
Nov 24 00:19:52.112550 kubelet[3164]: E1124 00:19:52.112384 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8qltp" podUID="e06c5900-d0dc-4011-934f-01926c96ebe8" Nov 24 00:19:52.114377 kubelet[3164]: E1124 00:19:52.114342 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbb9bbbc6-vdbzq" podUID="0938abdc-cc2b-4018-9eeb-6e2be7bfa61a"