Nov 8 00:24:54.073105 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:24:54.073133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:54.073142 kernel: BIOS-provided physical RAM map:
Nov 8 00:24:54.073150 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:24:54.073166 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 8 00:24:54.073177 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 8 00:24:54.073184 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 8 00:24:54.073194 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 8 00:24:54.073207 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 8 00:24:54.073221 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 8 00:24:54.073232 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 8 00:24:54.073241 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 8 00:24:54.073247 kernel: printk: bootconsole [earlyser0] enabled
Nov 8 00:24:54.073259 kernel: NX (Execute Disable) protection: active
Nov 8 00:24:54.073281 kernel: APIC: Static calls initialized
Nov 8 00:24:54.073293 kernel: efi: EFI v2.7 by Microsoft
Nov 8 00:24:54.073300 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Nov 8 00:24:54.073307 kernel: SMBIOS 3.1.0 present.
Nov 8 00:24:54.073325 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 8 00:24:54.073339 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 8 00:24:54.073352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 8 00:24:54.073364 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Nov 8 00:24:54.073371 kernel: Hyper-V: Nested features: 0x1e0101
Nov 8 00:24:54.073378 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 8 00:24:54.073393 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 8 00:24:54.073424 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:24:54.073433 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:24:54.073440 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 8 00:24:54.073454 kernel: tsc: Detected 2593.907 MHz processor
Nov 8 00:24:54.073466 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:24:54.073474 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:24:54.073481 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 8 00:24:54.073497 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:24:54.073515 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:24:54.073523 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 8 00:24:54.073530 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 8 00:24:54.073546 kernel: Using GB pages for direct mapping
Nov 8 00:24:54.073560 kernel: Secure boot disabled
Nov 8 00:24:54.073575 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:24:54.073582 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 8 00:24:54.073597 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073618 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073633 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 8 00:24:54.073643 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 8 00:24:54.073651 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073663 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073679 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073693 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073701 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073713 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073730 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:24:54.073742 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 8 00:24:54.073749 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 8 00:24:54.073758 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 8 00:24:54.073776 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 8 00:24:54.073793 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 8 00:24:54.073804 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 8 00:24:54.073812 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 8 00:24:54.073824 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 8 00:24:54.073840 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 8 00:24:54.073852 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 8 00:24:54.073860 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:24:54.073870 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:24:54.073885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 8 00:24:54.073898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 8 00:24:54.073906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 8 00:24:54.073918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 8 00:24:54.073931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 8 00:24:54.073940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 8 00:24:54.073948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 8 00:24:54.073962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 8 00:24:54.073976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 8 00:24:54.073986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 8 00:24:54.073996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 8 00:24:54.074012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 8 00:24:54.074026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 8 00:24:54.074035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 8 00:24:54.074042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 8 00:24:54.074057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 8 00:24:54.074074 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 8 00:24:54.074084 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 8 00:24:54.074092 kernel: Zone ranges:
Nov 8 00:24:54.074110 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:24:54.074123 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:24:54.074131 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:24:54.074139 kernel: Movable zone start for each node
Nov 8 00:24:54.074158 kernel: Early memory node ranges
Nov 8 00:24:54.074173 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:24:54.074182 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 8 00:24:54.074189 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 8 00:24:54.074201 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:24:54.074224 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 8 00:24:54.074239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:24:54.074246 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:24:54.074256 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 8 00:24:54.074274 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 8 00:24:54.074285 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 8 00:24:54.074293 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:24:54.074305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:24:54.074320 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:24:54.074336 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 8 00:24:54.074344 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:24:54.074356 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 8 00:24:54.074370 kernel: Booting paravirtualized kernel on Hyper-V
Nov 8 00:24:54.074382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:24:54.076427 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:24:54.076452 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:24:54.076465 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:24:54.076474 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:24:54.076490 kernel: Hyper-V: PV spinlocks enabled
Nov 8 00:24:54.076500 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:24:54.076512 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:54.076521 kernel: random: crng init done
Nov 8 00:24:54.076531 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 8 00:24:54.076541 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:24:54.076550 kernel: Fallback order for Node 0: 0
Nov 8 00:24:54.076557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 8 00:24:54.076569 kernel: Policy zone: Normal
Nov 8 00:24:54.076587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:24:54.076598 kernel: software IO TLB: area num 2.
Nov 8 00:24:54.076612 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 310124K reserved, 0K cma-reserved)
Nov 8 00:24:54.076621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:24:54.076631 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:24:54.076639 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:24:54.076650 kernel: Dynamic Preempt: voluntary
Nov 8 00:24:54.076659 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:24:54.076671 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:24:54.076681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:24:54.076692 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:24:54.076700 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:24:54.076708 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:24:54.076716 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:24:54.076724 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:24:54.076735 kernel: Using NULL legacy PIC
Nov 8 00:24:54.076743 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 8 00:24:54.076751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:24:54.076759 kernel: Console: colour dummy device 80x25
Nov 8 00:24:54.076767 kernel: printk: console [tty1] enabled
Nov 8 00:24:54.076775 kernel: printk: console [ttyS0] enabled
Nov 8 00:24:54.076783 kernel: printk: bootconsole [earlyser0] disabled
Nov 8 00:24:54.076791 kernel: ACPI: Core revision 20230628
Nov 8 00:24:54.076803 kernel: Failed to register legacy timer interrupt
Nov 8 00:24:54.076812 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:24:54.076822 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 8 00:24:54.076833 kernel: Hyper-V: Using IPI hypercalls
Nov 8 00:24:54.076841 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 8 00:24:54.076852 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 8 00:24:54.076860 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 8 00:24:54.076870 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 8 00:24:54.076880 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 8 00:24:54.076888 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 8 00:24:54.076899 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Nov 8 00:24:54.076910 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:24:54.076921 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:24:54.076929 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:24:54.076941 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:24:54.076949 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:24:54.076958 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:24:54.076968 kernel: RETBleed: Vulnerable
Nov 8 00:24:54.076976 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:24:54.076987 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:24:54.076995 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:24:54.077007 kernel: active return thunk: its_return_thunk
Nov 8 00:24:54.077016 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:24:54.077028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:24:54.077036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:24:54.077047 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:24:54.077059 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:24:54.077067 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:24:54.077078 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:24:54.077089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:24:54.077097 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 8 00:24:54.077108 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 8 00:24:54.077119 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 8 00:24:54.077129 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 8 00:24:54.077138 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:24:54.077147 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:24:54.077157 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:24:54.077165 kernel: landlock: Up and running.
Nov 8 00:24:54.077176 kernel: SELinux: Initializing.
Nov 8 00:24:54.077185 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:24:54.077194 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:24:54.077204 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:24:54.077212 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:54.077227 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:54.077235 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:54.077246 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:24:54.077254 kernel: signal: max sigframe size: 3632
Nov 8 00:24:54.077266 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:24:54.077274 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:24:54.077283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:24:54.077293 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:24:54.077301 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:24:54.077314 kernel: .... node #0, CPUs: #1
Nov 8 00:24:54.077322 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 8 00:24:54.077334 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:24:54.077342 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:24:54.077352 kernel: smpboot: Max logical packages: 1
Nov 8 00:24:54.077362 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 8 00:24:54.077370 kernel: devtmpfs: initialized
Nov 8 00:24:54.077381 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:24:54.077391 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 8 00:24:54.077402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:24:54.077432 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:24:54.077444 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:24:54.077453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:24:54.077463 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:24:54.077474 kernel: audit: type=2000 audit(1762561492.029:1): state=initialized audit_enabled=0 res=1
Nov 8 00:24:54.077482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:24:54.077493 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:24:54.077506 kernel: cpuidle: using governor menu
Nov 8 00:24:54.077515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:24:54.077526 kernel: dca service started, version 1.12.1
Nov 8 00:24:54.077534 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 8 00:24:54.077546 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:24:54.077555 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:24:54.077565 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:24:54.077574 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:24:54.077583 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:24:54.077596 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:24:54.077604 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:24:54.077616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:24:54.077624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:24:54.077635 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:24:54.077644 kernel: ACPI: Interpreter enabled
Nov 8 00:24:54.077652 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:24:54.077660 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:24:54.077672 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:24:54.077682 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 8 00:24:54.077694 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 8 00:24:54.077703 kernel: iommu: Default domain type: Translated
Nov 8 00:24:54.077713 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:24:54.077722 kernel: efivars: Registered efivars operations
Nov 8 00:24:54.077730 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:24:54.077741 kernel: PCI: System does not support PCI
Nov 8 00:24:54.077750 kernel: vgaarb: loaded
Nov 8 00:24:54.077760 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 8 00:24:54.077771 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:24:54.077779 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:24:54.077790 kernel: pnp: PnP ACPI init
Nov 8 00:24:54.077798 kernel: pnp: PnP ACPI: found 3 devices
Nov 8 00:24:54.077810 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:24:54.077819 kernel: NET: Registered PF_INET protocol family
Nov 8 00:24:54.077828 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:24:54.077838 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 8 00:24:54.077846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:24:54.077860 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:24:54.077870 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 8 00:24:54.077879 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 8 00:24:54.077891 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:24:54.077900 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:24:54.077910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:24:54.077920 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:24:54.077932 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:24:54.077940 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:24:54.077954 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Nov 8 00:24:54.077962 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:24:54.077970 kernel: Initialise system trusted keyrings
Nov 8 00:24:54.077981 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 8 00:24:54.077989 kernel: Key type asymmetric registered
Nov 8 00:24:54.077999 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:24:54.078008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:24:54.078017 kernel: io scheduler mq-deadline registered
Nov 8 00:24:54.078028 kernel: io scheduler kyber registered
Nov 8 00:24:54.078038 kernel: io scheduler bfq registered
Nov 8 00:24:54.078050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:24:54.078058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:24:54.078067 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:24:54.078077 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 8 00:24:54.078085 kernel: i8042: PNP: No PS/2 controller found.
Nov 8 00:24:54.078234 kernel: rtc_cmos 00:02: registered as rtc0
Nov 8 00:24:54.078331 kernel: rtc_cmos 00:02: setting system clock to 2025-11-08T00:24:53 UTC (1762561493)
Nov 8 00:24:54.079478 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 8 00:24:54.079506 kernel: intel_pstate: CPU model not supported
Nov 8 00:24:54.079520 kernel: efifb: probing for efifb
Nov 8 00:24:54.079534 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 8 00:24:54.079547 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 8 00:24:54.079561 kernel: efifb: scrolling: redraw
Nov 8 00:24:54.079577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:24:54.079591 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:24:54.079606 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:24:54.079626 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:24:54.079642 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:24:54.079658 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:24:54.079675 kernel: Segment Routing with IPv6
Nov 8 00:24:54.079691 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:24:54.079707 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:24:54.079723 kernel: Key type dns_resolver registered
Nov 8 00:24:54.079739 kernel: IPI shorthand broadcast: enabled
Nov 8 00:24:54.079756 kernel: sched_clock: Marking stable (833002800, 46609600)->(1079857500, -200245100)
Nov 8 00:24:54.079776 kernel: registered taskstats version 1
Nov 8 00:24:54.079790 kernel: Loading compiled-in X.509 certificates
Nov 8 00:24:54.079805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:24:54.079821 kernel: Key type .fscrypt registered
Nov 8 00:24:54.079835 kernel: Key type fscrypt-provisioning registered
Nov 8 00:24:54.079849 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:24:54.079862 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:24:54.079877 kernel: ima: No architecture policies found
Nov 8 00:24:54.079891 kernel: clk: Disabling unused clocks
Nov 8 00:24:54.079908 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:24:54.079922 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:24:54.079935 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:24:54.079946 kernel: Run /init as init process
Nov 8 00:24:54.079956 kernel: with arguments:
Nov 8 00:24:54.079966 kernel: /init
Nov 8 00:24:54.079978 kernel: with environment:
Nov 8 00:24:54.079986 kernel: HOME=/
Nov 8 00:24:54.079997 kernel: TERM=linux
Nov 8 00:24:54.080007 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:24:54.080023 systemd[1]: Detected virtualization microsoft.
Nov 8 00:24:54.080033 systemd[1]: Detected architecture x86-64.
Nov 8 00:24:54.080043 systemd[1]: Running in initrd.
Nov 8 00:24:54.080052 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:24:54.080063 systemd[1]: Hostname set to .
Nov 8 00:24:54.080072 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:24:54.080085 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:24:54.080094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:24:54.080106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:24:54.080115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:24:54.080126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:24:54.080136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:24:54.080146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:24:54.080159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:24:54.080171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:24:54.080180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:24:54.080190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:24:54.080201 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:24:54.080209 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:24:54.080221 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:24:54.080229 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:24:54.080243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:24:54.080251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:24:54.080263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:24:54.080272 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:24:54.080284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:24:54.080293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:24:54.080304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:24:54.080313 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:24:54.080323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:24:54.080336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:24:54.080346 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:24:54.080357 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:24:54.080368 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:24:54.080377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:24:54.080419 systemd-journald[176]: Collecting audit messages is disabled.
Nov 8 00:24:54.080445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:24:54.080457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:24:54.080466 systemd-journald[176]: Journal started
Nov 8 00:24:54.080488 systemd-journald[176]: Runtime Journal (/run/log/journal/a51637e3f3814365b89e7319ca1b75e3) is 8.0M, max 158.8M, 150.8M free.
Nov 8 00:24:54.093140 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:24:54.095357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:24:54.098763 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:24:54.101324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:24:54.104338 systemd-modules-load[177]: Inserted module 'overlay'
Nov 8 00:24:54.120630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:24:54.135600 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:24:54.143538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:24:54.156688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:24:54.158535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:24:54.167618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:24:54.178434 kernel: Bridge firewalling registered
Nov 8 00:24:54.179532 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 8 00:24:54.185123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:24:54.186350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:24:54.190690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:24:54.195767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:24:54.203612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:24:54.208527 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:24:54.223107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:24:54.232621 dracut-cmdline[210]: dracut-dracut-053
Nov 8 00:24:54.234698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:24:54.240602 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:54.290933 systemd-resolved[219]: Positive Trust Anchors:
Nov 8 00:24:54.290947 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:24:54.290997 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:24:54.317728 systemd-resolved[219]: Defaulting to hostname 'linux'.
Nov 8 00:24:54.318906 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:24:54.322074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:24:54.341424 kernel: SCSI subsystem initialized
Nov 8 00:24:54.351424 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:24:54.362424 kernel: iscsi: registered transport (tcp)
Nov 8 00:24:54.382864 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:24:54.382915 kernel: QLogic iSCSI HBA Driver
Nov 8 00:24:54.418533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:24:54.425646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:24:54.452349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:24:54.452435 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:24:54.455754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:24:54.495427 kernel: raid6: avx512x4 gen() 18583 MB/s
Nov 8 00:24:54.514422 kernel: raid6: avx512x2 gen() 18593 MB/s
Nov 8 00:24:54.533416 kernel: raid6: avx512x1 gen() 18679 MB/s
Nov 8 00:24:54.552418 kernel: raid6: avx2x4 gen() 18489 MB/s
Nov 8 00:24:54.571421 kernel: raid6: avx2x2 gen() 18578 MB/s
Nov 8 00:24:54.591953 kernel: raid6: avx2x1 gen() 14216 MB/s
Nov 8 00:24:54.591995 kernel: raid6: using algorithm avx512x1 gen() 18679 MB/s
Nov 8 00:24:54.613428 kernel: raid6: .... xor() 26953 MB/s, rmw enabled
Nov 8 00:24:54.613458 kernel: raid6: using avx512x2 recovery algorithm
Nov 8 00:24:54.636427 kernel: xor: automatically using best checksumming function avx
Nov 8 00:24:54.788434 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:24:54.798294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:24:54.808558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:24:54.821223 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Nov 8 00:24:54.825700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:24:54.841520 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:24:54.854448 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 8 00:24:54.879440 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:24:54.890560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:24:54.932286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:24:54.949596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:24:54.970595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:24:54.977205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:24:54.983552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:24:54.986690 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:24:55.001629 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:24:55.028694 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:24:55.032825 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:24:55.042428 kernel: hv_vmbus: Vmbus version:5.2 Nov 8 00:24:55.063779 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:24:55.063836 kernel: AES CTR mode by8 optimization enabled Nov 8 00:24:55.073446 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 8 00:24:55.079399 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 00:24:55.079470 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 00:24:55.088908 kernel: PTP clock support registered Nov 8 00:24:55.090964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:24:55.091180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:24:55.106805 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 8 00:24:55.108528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 8 00:24:54.967202 kernel: hv_utils: Registering HyperV Utility Driver Nov 8 00:24:54.977139 kernel: hv_vmbus: registering driver hv_utils Nov 8 00:24:54.977157 kernel: hv_utils: Heartbeat IC version 3.0 Nov 8 00:24:54.977166 kernel: hv_utils: Shutdown IC version 3.2 Nov 8 00:24:54.977174 kernel: hv_utils: TimeSync IC version 4.0 Nov 8 00:24:54.977184 kernel: hv_vmbus: registering driver hv_netvsc Nov 8 00:24:54.977194 systemd-journald[176]: Time jumped backwards, rotating. Nov 8 00:24:54.937884 systemd-resolved[219]: Clock change detected. Flushing caches. Nov 8 00:24:54.974804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:24:54.975095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:24:54.978016 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:54.993937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:55.006201 kernel: hv_vmbus: registering driver hv_storvsc Nov 8 00:24:55.006430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:24:55.006551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:24:55.022633 kernel: scsi host1: storvsc_host_t Nov 8 00:24:55.022810 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:24:55.022822 kernel: scsi host0: storvsc_host_t Nov 8 00:24:55.022281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:55.032820 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 8 00:24:55.032877 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 8 00:24:55.047555 kernel: hv_vmbus: registering driver hid_hyperv Nov 8 00:24:55.052824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:24:55.065556 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 8 00:24:55.071545 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 8 00:24:55.072786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:24:55.083015 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 8 00:24:55.083231 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:24:55.088599 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 8 00:24:55.095810 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 8 00:24:55.096121 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 8 00:24:55.102286 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:24:55.102590 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 8 00:24:55.102815 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 8 00:24:55.104640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:24:55.110164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:55.117640 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:24:55.256592 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442) Nov 8 00:24:55.263557 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (450) Nov 8 00:24:55.277678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 8 00:24:55.295427 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 8 00:24:55.306753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:24:55.314682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Nov 8 00:24:55.314801 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 8 00:24:55.329736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:24:55.345553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:55.353549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:56.368425 disk-uuid[590]: The operation has completed successfully. Nov 8 00:24:56.372203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:56.445874 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:24:56.445986 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:24:56.476684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:24:56.482429 sh[703]: Success Nov 8 00:24:56.502577 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:24:56.603844 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:24:56.623654 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:24:56.629379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:24:56.648555 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:24:56.648591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:56.654790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:24:56.657552 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:24:56.660039 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:24:56.744095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:24:56.750099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Nov 8 00:24:56.759686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:24:56.766291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:24:56.779605 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: VF slot 1 added Nov 8 00:24:56.792551 kernel: hv_vmbus: registering driver hv_pci Nov 8 00:24:56.792586 kernel: hv_pci 7b63dc29-f54f-4b2f-89a3-88f915e0fe26: PCI VMBus probing: Using version 0x10004 Nov 8 00:24:56.802313 kernel: hv_pci 7b63dc29-f54f-4b2f-89a3-88f915e0fe26: PCI host bridge to bus f54f:00 Nov 8 00:24:56.802595 kernel: pci_bus f54f:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 8 00:24:56.805800 kernel: pci_bus f54f:00: No busn resource found for root bus, will use [bus 00-ff] Nov 8 00:24:56.812557 kernel: pci f54f:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 8 00:24:56.817572 kernel: pci f54f:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:24:56.821574 kernel: pci f54f:00:02.0: enabling Extended Tags Nov 8 00:24:56.835914 kernel: pci f54f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f54f:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 8 00:24:56.836110 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:56.836124 kernel: pci_bus f54f:00: busn_res: [bus 00-ff] end is updated to 00 Nov 8 00:24:56.847923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:56.847958 kernel: pci f54f:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:24:56.848174 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:56.869549 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:56.886159 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 8 00:24:56.888686 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:56.899058 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:24:56.912692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:24:56.919422 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:24:56.929708 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:24:56.965366 systemd-networkd[887]: lo: Link UP Nov 8 00:24:56.967874 systemd-networkd[887]: lo: Gained carrier Nov 8 00:24:56.969039 systemd-networkd[887]: Enumeration completed Nov 8 00:24:56.969458 systemd-networkd[887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:24:56.969463 systemd-networkd[887]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:24:56.982962 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:24:56.985764 systemd[1]: Reached target network.target - Network. Nov 8 00:24:56.995119 systemd-networkd[887]: eth0: Link UP Nov 8 00:24:56.997781 systemd-networkd[887]: eth0: Gained carrier Nov 8 00:24:56.999571 systemd-networkd[887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:24:57.073455 kernel: mlx5_core f54f:00:02.0: enabling device (0000 -> 0002) Nov 8 00:24:57.077629 kernel: mlx5_core f54f:00:02.0: firmware version: 14.30.5006 Nov 8 00:24:57.083598 systemd-networkd[887]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:24:57.309252 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: VF registering: eth1 Nov 8 00:24:57.309651 kernel: mlx5_core f54f:00:02.0 eth1: joined to eth0 Nov 8 00:24:57.325633 kernel: mlx5_core f54f:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 00:24:57.338488 kernel: mlx5_core f54f:00:02.0 enP62799s1: renamed from eth1 Nov 8 00:24:57.341596 ignition[882]: Ignition 2.19.0 Nov 8 00:24:57.341607 ignition[882]: Stage: fetch-offline Nov 8 00:24:57.341643 ignition[882]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.341654 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.345222 systemd-networkd[887]: eth1: Interface name change detected, renamed to enP62799s1. Nov 8 00:24:57.341801 ignition[882]: parsed url from cmdline: "" Nov 8 00:24:57.341806 ignition[882]: no config URL provided Nov 8 00:24:57.341813 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:24:57.341823 ignition[882]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:24:57.341829 ignition[882]: failed to fetch config: resource requires networking Nov 8 00:24:57.350677 ignition[882]: Ignition finished successfully Nov 8 00:24:57.365906 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:24:57.379817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:24:57.397804 ignition[907]: Ignition 2.19.0 Nov 8 00:24:57.397815 ignition[907]: Stage: fetch Nov 8 00:24:57.398030 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.398043 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.399663 ignition[907]: parsed url from cmdline: "" Nov 8 00:24:57.399669 ignition[907]: no config URL provided Nov 8 00:24:57.399676 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:24:57.399687 ignition[907]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:24:57.399713 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 8 00:24:57.468804 kernel: mlx5_core f54f:00:02.0 enP62799s1: Link up Nov 8 00:24:57.468625 systemd-networkd[887]: enP62799s1: Link UP Nov 8 00:24:57.482697 ignition[907]: GET result: OK Nov 8 00:24:57.482804 ignition[907]: config has been read from IMDS userdata Nov 8 00:24:57.482837 ignition[907]: parsing config with SHA512: 6c2ecc4c611834c8b6789170854a653d9107590896be8a1818db62e6b764b4c8a3b05bfac982b1f360e6ab0f21f563a05fa8c74addda9cfcb185168112d3f38b Nov 8 00:24:57.491640 unknown[907]: fetched base config from "system" Nov 8 00:24:57.499624 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: Data path switched to VF: enP62799s1 Nov 8 00:24:57.492351 ignition[907]: fetch: fetch complete Nov 8 00:24:57.491667 unknown[907]: fetched base config from "system" Nov 8 00:24:57.492359 ignition[907]: fetch: fetch passed Nov 8 00:24:57.491676 unknown[907]: fetched user config from "azure" Nov 8 00:24:57.492411 ignition[907]: Ignition finished successfully Nov 8 00:24:57.495998 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:24:57.512752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 8 00:24:57.529118 ignition[913]: Ignition 2.19.0 Nov 8 00:24:57.529127 ignition[913]: Stage: kargs Nov 8 00:24:57.532016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:24:57.529342 ignition[913]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.529355 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.530217 ignition[913]: kargs: kargs passed Nov 8 00:24:57.530257 ignition[913]: Ignition finished successfully Nov 8 00:24:57.544728 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:24:57.558393 ignition[919]: Ignition 2.19.0 Nov 8 00:24:57.558403 ignition[919]: Stage: disks Nov 8 00:24:57.561092 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:24:57.558648 ignition[919]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.564732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:24:57.558661 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.569826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:24:57.559568 ignition[919]: disks: disks passed Nov 8 00:24:57.573034 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:24:57.559612 ignition[919]: Ignition finished successfully Nov 8 00:24:57.578013 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:24:57.586475 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:24:57.606693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:24:57.632874 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 8 00:24:57.638593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:24:57.649782 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 8 00:24:57.742556 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:24:57.742710 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:24:57.743342 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:24:57.755609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:24:57.768636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Nov 8 00:24:57.774583 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:57.774618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:57.772659 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:24:57.788688 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:57.781683 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:24:57.794621 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:57.785799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:24:57.785835 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:24:57.803982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:24:57.808599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:24:57.820677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 8 00:24:57.971186 coreos-metadata[953]: Nov 08 00:24:57.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:24:57.975566 coreos-metadata[953]: Nov 08 00:24:57.973 INFO Fetch successful Nov 8 00:24:57.975566 coreos-metadata[953]: Nov 08 00:24:57.973 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:24:57.983400 coreos-metadata[953]: Nov 08 00:24:57.983 INFO Fetch successful Nov 8 00:24:57.986604 coreos-metadata[953]: Nov 08 00:24:57.986 INFO wrote hostname ci-4081.3.6-n-75d3e74165 to /sysroot/etc/hostname Nov 8 00:24:57.988064 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:24:58.012753 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:24:58.027451 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:24:58.032827 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:24:58.039320 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:24:58.042809 systemd-networkd[887]: enP62799s1: Gained carrier Nov 8 00:24:58.045079 systemd-networkd[887]: eth0: Gained IPv6LL Nov 8 00:24:58.321655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:24:58.333633 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:24:58.345649 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:24:58.357466 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 8 00:24:58.363451 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:58.388585 ignition[1056]: INFO : Ignition 2.19.0 Nov 8 00:24:58.388585 ignition[1056]: INFO : Stage: mount Nov 8 00:24:58.388585 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:58.388585 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:58.403609 ignition[1056]: INFO : mount: mount passed Nov 8 00:24:58.403609 ignition[1056]: INFO : Ignition finished successfully Nov 8 00:24:58.391136 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:24:58.393858 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:24:58.414689 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:24:58.429719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:24:58.444590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1067) Nov 8 00:24:58.448561 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:58.448590 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:58.453363 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:58.459555 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:58.461732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:24:58.487554 ignition[1084]: INFO : Ignition 2.19.0 Nov 8 00:24:58.487554 ignition[1084]: INFO : Stage: files Nov 8 00:24:58.487554 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:58.487554 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:58.497545 ignition[1084]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:24:58.497545 ignition[1084]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:24:58.497545 ignition[1084]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:24:58.520378 ignition[1084]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:24:58.523993 ignition[1084]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:24:58.523993 ignition[1084]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:24:58.520811 unknown[1084]: wrote ssh authorized keys file for user: core Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:24:58.621189 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:24:58.685223 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 
00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 
ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:24:59.037012 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:24:59.377304 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:59.377304 ignition[1084]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: files passed Nov 8 00:24:59.392218 ignition[1084]: INFO : Ignition finished successfully Nov 8 00:24:59.384665 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:24:59.449697 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:24:59.456047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:24:59.459275 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:24:59.461585 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:24:59.474253 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.474253 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.485839 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.478319 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:24:59.493039 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:24:59.503681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:24:59.526385 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:24:59.526497 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Nov 8 00:24:59.535809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:24:59.541267 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:24:59.544084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:24:59.552715 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:24:59.567168 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:24:59.579663 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:24:59.594783 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:24:59.600817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:24:59.606791 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:24:59.611432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:24:59.611590 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:24:59.617553 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:24:59.622483 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:24:59.627368 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:24:59.632332 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:24:59.640590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:24:59.646356 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:24:59.648975 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:24:59.654764 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:24:59.660734 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Nov 8 00:24:59.665718 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:24:59.670468 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:24:59.670642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:24:59.675706 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:24:59.682814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:24:59.691016 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:24:59.693294 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:24:59.696631 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:24:59.696763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:24:59.707920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:24:59.708106 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:24:59.717133 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:24:59.717307 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:24:59.722388 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:24:59.722503 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:24:59.742723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:24:59.748902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:24:59.750283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:24:59.752279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:24:59.761754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:24:59.761955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 8 00:24:59.777822 ignition[1137]: INFO : Ignition 2.19.0 Nov 8 00:24:59.777822 ignition[1137]: INFO : Stage: umount Nov 8 00:24:59.777822 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:59.777822 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:59.771133 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:24:59.790456 ignition[1137]: INFO : umount: umount passed Nov 8 00:24:59.790456 ignition[1137]: INFO : Ignition finished successfully Nov 8 00:24:59.771259 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:24:59.780287 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:24:59.780491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:24:59.787939 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:24:59.788034 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:24:59.811071 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:24:59.811145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:24:59.816300 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:24:59.816357 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:24:59.821366 systemd[1]: Stopped target network.target - Network. Nov 8 00:24:59.823481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:24:59.823547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:24:59.828695 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:24:59.842028 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:24:59.844785 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:24:59.852281 systemd[1]: Stopped target slices.target - Slice Units. 
Nov 8 00:24:59.856942 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:24:59.861615 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:24:59.861674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:24:59.866121 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:24:59.866162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:24:59.871063 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:24:59.871117 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:24:59.875745 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:24:59.875795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:24:59.878708 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:24:59.883850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:24:59.889650 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:24:59.900605 systemd-networkd[887]: eth0: DHCPv6 lease lost Nov 8 00:24:59.902525 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:24:59.902640 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:24:59.908142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:24:59.908223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:24:59.927695 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:24:59.932836 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:24:59.932908 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:24:59.941833 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:24:59.945864 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Nov 8 00:24:59.945969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:24:59.963837 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:24:59.966369 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:24:59.974299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:24:59.974641 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:24:59.979713 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:24:59.979757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:24:59.987938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:24:59.987996 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:24:59.998120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:24:59.998186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:25:00.003786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:25:00.003831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:25:00.015547 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: Data path switched from VF: enP62799s1 Nov 8 00:25:00.023728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:25:00.026635 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:25:00.026706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:25:00.029375 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:25:00.029430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:25:00.032911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 8 00:25:00.032956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:25:00.054247 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:25:00.054317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:25:00.060184 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:25:00.060241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:25:00.072557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:25:00.072627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:25:00.081487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:25:00.081562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:25:00.087422 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:25:00.087520 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:25:00.092744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:25:00.092833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:25:00.610515 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:25:00.610729 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:25:00.617446 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:25:00.621622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:25:00.621691 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:25:00.637770 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:25:00.741558 systemd[1]: Switching root. 
Nov 8 00:25:00.774576 systemd-journald[176]: Journal stopped Nov 8 00:24:54.073105 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:24:54.073133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:24:54.073142 kernel: BIOS-provided physical RAM map: Nov 8 00:24:54.073150 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 8 00:24:54.073166 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 8 00:24:54.073177 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 8 00:24:54.073184 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Nov 8 00:24:54.073194 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Nov 8 00:24:54.073207 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 8 00:24:54.073221 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 8 00:24:54.073232 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 8 00:24:54.073241 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 8 00:24:54.073247 kernel: printk: bootconsole [earlyser0] enabled Nov 8 00:24:54.073259 kernel: NX (Execute Disable) protection: active Nov 8 00:24:54.073281 kernel: APIC: Static calls initialized Nov 8 00:24:54.073293 kernel: efi: EFI v2.7 by Microsoft Nov 8 00:24:54.073300 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 
SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Nov 8 00:24:54.073307 kernel: SMBIOS 3.1.0 present. Nov 8 00:24:54.073325 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 8 00:24:54.073339 kernel: Hypervisor detected: Microsoft Hyper-V Nov 8 00:24:54.073352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 8 00:24:54.073364 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Nov 8 00:24:54.073371 kernel: Hyper-V: Nested features: 0x1e0101 Nov 8 00:24:54.073378 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 8 00:24:54.073393 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 8 00:24:54.073424 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 8 00:24:54.073433 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 8 00:24:54.073440 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 8 00:24:54.073454 kernel: tsc: Detected 2593.907 MHz processor Nov 8 00:24:54.073466 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:24:54.073474 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:24:54.073481 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 8 00:24:54.073497 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 8 00:24:54.073515 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:24:54.073523 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 8 00:24:54.073530 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 8 00:24:54.073546 kernel: Using GB pages for direct mapping Nov 8 00:24:54.073560 kernel: Secure boot disabled Nov 8 00:24:54.073575 kernel: ACPI: Early table checksum verification disabled Nov 8 00:24:54.073582 kernel: ACPI: RSDP 0x000000003FFFA014 
000024 (v02 VRTUAL) Nov 8 00:24:54.073597 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073618 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073633 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 8 00:24:54.073643 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 8 00:24:54.073651 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073663 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073679 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073693 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073701 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073713 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073730 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:24:54.073742 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 8 00:24:54.073749 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 8 00:24:54.073758 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 8 00:24:54.073776 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 8 00:24:54.073793 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 8 00:24:54.073804 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 8 00:24:54.073812 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 8 00:24:54.073824 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 8 00:24:54.073840 kernel: 
ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 8 00:24:54.073852 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 8 00:24:54.073860 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:24:54.073870 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 8 00:24:54.073885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 8 00:24:54.073898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 8 00:24:54.073906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 8 00:24:54.073918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 8 00:24:54.073931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 8 00:24:54.073940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 8 00:24:54.073948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 8 00:24:54.073962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 8 00:24:54.073976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 8 00:24:54.073986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 8 00:24:54.073996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 8 00:24:54.074012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 8 00:24:54.074026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 8 00:24:54.074035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 8 00:24:54.074042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 8 00:24:54.074057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 8 00:24:54.074074 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 8 
00:24:54.074084 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 8 00:24:54.074092 kernel: Zone ranges: Nov 8 00:24:54.074110 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:24:54.074123 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 8 00:24:54.074131 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 8 00:24:54.074139 kernel: Movable zone start for each node Nov 8 00:24:54.074158 kernel: Early memory node ranges Nov 8 00:24:54.074173 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 8 00:24:54.074182 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 8 00:24:54.074189 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 8 00:24:54.074201 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 8 00:24:54.074224 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 8 00:24:54.074239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:24:54.074246 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 8 00:24:54.074256 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Nov 8 00:24:54.074274 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 8 00:24:54.074285 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 8 00:24:54.074293 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:24:54.074305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:24:54.074320 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:24:54.074336 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 8 00:24:54.074344 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:24:54.074356 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 8 00:24:54.074370 kernel: Booting paravirtualized kernel on Hyper-V Nov 8 00:24:54.074382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 
00:24:54.076427 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:24:54.076452 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:24:54.076465 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:24:54.076474 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:24:54.076490 kernel: Hyper-V: PV spinlocks enabled Nov 8 00:24:54.076500 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:24:54.076512 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:24:54.076521 kernel: random: crng init done Nov 8 00:24:54.076531 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 8 00:24:54.076541 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:24:54.076550 kernel: Fallback order for Node 0: 0 Nov 8 00:24:54.076557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 8 00:24:54.076569 kernel: Policy zone: Normal Nov 8 00:24:54.076587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:24:54.076598 kernel: software IO TLB: area num 2. 
Nov 8 00:24:54.076612 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 310124K reserved, 0K cma-reserved) Nov 8 00:24:54.076621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:24:54.076631 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:24:54.076639 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:24:54.076650 kernel: Dynamic Preempt: voluntary Nov 8 00:24:54.076659 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:24:54.076671 kernel: rcu: RCU event tracing is enabled. Nov 8 00:24:54.076681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:24:54.076692 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:24:54.076700 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:24:54.076708 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:24:54.076716 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:24:54.076724 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:24:54.076735 kernel: Using NULL legacy PIC Nov 8 00:24:54.076743 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 8 00:24:54.076751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 8 00:24:54.076759 kernel: Console: colour dummy device 80x25 Nov 8 00:24:54.076767 kernel: printk: console [tty1] enabled Nov 8 00:24:54.076775 kernel: printk: console [ttyS0] enabled Nov 8 00:24:54.076783 kernel: printk: bootconsole [earlyser0] disabled Nov 8 00:24:54.076791 kernel: ACPI: Core revision 20230628 Nov 8 00:24:54.076803 kernel: Failed to register legacy timer interrupt Nov 8 00:24:54.076812 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:24:54.076822 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 8 00:24:54.076833 kernel: Hyper-V: Using IPI hypercalls Nov 8 00:24:54.076841 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 8 00:24:54.076852 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 8 00:24:54.076860 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 8 00:24:54.076870 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 8 00:24:54.076880 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 8 00:24:54.076888 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 8 00:24:54.076899 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Nov 8 00:24:54.076910 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:24:54.076921 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:24:54.076929 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:24:54.076941 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:24:54.076949 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:24:54.076958 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 8 00:24:54.076968 kernel: RETBleed: Vulnerable Nov 8 00:24:54.076976 kernel: Speculative Store Bypass: Vulnerable Nov 8 00:24:54.076987 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:24:54.076995 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:24:54.077007 kernel: active return thunk: its_return_thunk Nov 8 00:24:54.077016 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:24:54.077028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:24:54.077036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:24:54.077047 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:24:54.077059 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 8 00:24:54.077067 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 8 00:24:54.077078 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 8 00:24:54.077089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:24:54.077097 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 8 00:24:54.077108 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 8 00:24:54.077119 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 8 00:24:54.077129 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 8 00:24:54.077138 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:24:54.077147 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:24:54.077157 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:24:54.077165 kernel: landlock: Up and running. Nov 8 00:24:54.077176 kernel: SELinux: Initializing. 
Nov 8 00:24:54.077185 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:24:54.077194 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:24:54.077204 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 8 00:24:54.077212 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:24:54.077227 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:24:54.077235 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:24:54.077246 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 8 00:24:54.077254 kernel: signal: max sigframe size: 3632 Nov 8 00:24:54.077266 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:24:54.077274 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:24:54.077283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:24:54.077293 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:24:54.077301 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:24:54.077314 kernel: .... node #0, CPUs: #1 Nov 8 00:24:54.077322 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 8 00:24:54.077334 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 8 00:24:54.077342 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:24:54.077352 kernel: smpboot: Max logical packages: 1 Nov 8 00:24:54.077362 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 8 00:24:54.077370 kernel: devtmpfs: initialized Nov 8 00:24:54.077381 kernel: x86/mm: Memory block size: 128MB Nov 8 00:24:54.077391 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 8 00:24:54.077402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:24:54.077432 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:24:54.077444 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:24:54.077453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:24:54.077463 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:24:54.077474 kernel: audit: type=2000 audit(1762561492.029:1): state=initialized audit_enabled=0 res=1 Nov 8 00:24:54.077482 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:24:54.077493 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:24:54.077506 kernel: cpuidle: using governor menu Nov 8 00:24:54.077515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:24:54.077526 kernel: dca service started, version 1.12.1 Nov 8 00:24:54.077534 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 8 00:24:54.077546 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:24:54.077555 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:24:54.077565 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:24:54.077574 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:24:54.077583 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:24:54.077596 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:24:54.077604 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:24:54.077616 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:24:54.077624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:24:54.077635 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:24:54.077644 kernel: ACPI: Interpreter enabled Nov 8 00:24:54.077652 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:24:54.077660 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:24:54.077672 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:24:54.077682 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 8 00:24:54.077694 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 8 00:24:54.077703 kernel: iommu: Default domain type: Translated Nov 8 00:24:54.077713 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:24:54.077722 kernel: efivars: Registered efivars operations Nov 8 00:24:54.077730 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:24:54.077741 kernel: PCI: System does not support PCI Nov 8 00:24:54.077750 kernel: vgaarb: loaded Nov 8 00:24:54.077760 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 8 00:24:54.077771 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:24:54.077779 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:24:54.077790 kernel: pnp: PnP ACPI init Nov 8 00:24:54.077798 kernel: pnp: PnP ACPI: found 3 devices Nov 8 00:24:54.077810 kernel: clocksource: acpi_pm: mask: 
0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:24:54.077819 kernel: NET: Registered PF_INET protocol family Nov 8 00:24:54.077828 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:24:54.077838 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 8 00:24:54.077846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:24:54.077860 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:24:54.077870 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 8 00:24:54.077879 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 8 00:24:54.077891 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:24:54.077900 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:24:54.077910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:24:54.077920 kernel: NET: Registered PF_XDP protocol family Nov 8 00:24:54.077932 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:24:54.077940 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:24:54.077954 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Nov 8 00:24:54.077962 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:24:54.077970 kernel: Initialise system trusted keyrings Nov 8 00:24:54.077981 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 8 00:24:54.077989 kernel: Key type asymmetric registered Nov 8 00:24:54.077999 kernel: Asymmetric key parser 'x509' registered Nov 8 00:24:54.078008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:24:54.078017 kernel: io scheduler mq-deadline registered Nov 8 00:24:54.078028 kernel: io scheduler kyber registered Nov 8 00:24:54.078038 kernel: io scheduler bfq 
registered Nov 8 00:24:54.078050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:24:54.078058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:24:54.078067 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:24:54.078077 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 00:24:54.078085 kernel: i8042: PNP: No PS/2 controller found. Nov 8 00:24:54.078234 kernel: rtc_cmos 00:02: registered as rtc0 Nov 8 00:24:54.078331 kernel: rtc_cmos 00:02: setting system clock to 2025-11-08T00:24:53 UTC (1762561493) Nov 8 00:24:54.079478 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 8 00:24:54.079506 kernel: intel_pstate: CPU model not supported Nov 8 00:24:54.079520 kernel: efifb: probing for efifb Nov 8 00:24:54.079534 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 8 00:24:54.079547 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 8 00:24:54.079561 kernel: efifb: scrolling: redraw Nov 8 00:24:54.079577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:24:54.079591 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:24:54.079606 kernel: fb0: EFI VGA frame buffer device Nov 8 00:24:54.079626 kernel: pstore: Using crash dump compression: deflate Nov 8 00:24:54.079642 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:24:54.079658 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:24:54.079675 kernel: Segment Routing with IPv6 Nov 8 00:24:54.079691 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:24:54.079707 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:24:54.079723 kernel: Key type dns_resolver registered Nov 8 00:24:54.079739 kernel: IPI shorthand broadcast: enabled Nov 8 00:24:54.079756 kernel: sched_clock: Marking stable (833002800, 46609600)->(1079857500, -200245100) Nov 8 00:24:54.079776 kernel: registered taskstats version 1 Nov 8 
00:24:54.079790 kernel: Loading compiled-in X.509 certificates Nov 8 00:24:54.079805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:24:54.079821 kernel: Key type .fscrypt registered Nov 8 00:24:54.079835 kernel: Key type fscrypt-provisioning registered Nov 8 00:24:54.079849 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:24:54.079862 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:24:54.079877 kernel: ima: No architecture policies found Nov 8 00:24:54.079891 kernel: clk: Disabling unused clocks Nov 8 00:24:54.079908 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:24:54.079922 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:24:54.079935 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:24:54.079946 kernel: Run /init as init process Nov 8 00:24:54.079956 kernel: with arguments: Nov 8 00:24:54.079966 kernel: /init Nov 8 00:24:54.079978 kernel: with environment: Nov 8 00:24:54.079986 kernel: HOME=/ Nov 8 00:24:54.079997 kernel: TERM=linux Nov 8 00:24:54.080007 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:24:54.080023 systemd[1]: Detected virtualization microsoft. Nov 8 00:24:54.080033 systemd[1]: Detected architecture x86-64. Nov 8 00:24:54.080043 systemd[1]: Running in initrd. Nov 8 00:24:54.080052 systemd[1]: No hostname configured, using default hostname. Nov 8 00:24:54.080063 systemd[1]: Hostname set to . Nov 8 00:24:54.080072 systemd[1]: Initializing machine ID from random generator. Nov 8 00:24:54.080085 systemd[1]: Queued start job for default target initrd.target. 
Nov 8 00:24:54.080094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:24:54.080106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:24:54.080115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:24:54.080126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:24:54.080136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:24:54.080146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:24:54.080159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:24:54.080171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:24:54.080180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:24:54.080190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:24:54.080201 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:24:54.080209 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:24:54.080221 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:24:54.080229 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:24:54.080243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:24:54.080251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:24:54.080263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:24:54.080272 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:24:54.080284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:24:54.080293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:24:54.080304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:24:54.080313 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:24:54.080323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:24:54.080336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:24:54.080346 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:24:54.080357 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:24:54.080368 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:24:54.080377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:24:54.080419 systemd-journald[176]: Collecting audit messages is disabled. Nov 8 00:24:54.080445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:54.080457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:24:54.080466 systemd-journald[176]: Journal started Nov 8 00:24:54.080488 systemd-journald[176]: Runtime Journal (/run/log/journal/a51637e3f3814365b89e7319ca1b75e3) is 8.0M, max 158.8M, 150.8M free. Nov 8 00:24:54.093140 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:24:54.095357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:24:54.098763 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:24:54.101324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:24:54.104338 systemd-modules-load[177]: Inserted module 'overlay' Nov 8 00:24:54.120630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 8 00:24:54.135600 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:24:54.143538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:24:54.156688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:24:54.158535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:24:54.167618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:24:54.178434 kernel: Bridge firewalling registered Nov 8 00:24:54.179532 systemd-modules-load[177]: Inserted module 'br_netfilter' Nov 8 00:24:54.185123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:24:54.186350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:24:54.190690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:24:54.195767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:24:54.203612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:24:54.208527 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:24:54.223107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:24:54.232621 dracut-cmdline[210]: dracut-dracut-053 Nov 8 00:24:54.234698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 8 00:24:54.240602 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:24:54.290933 systemd-resolved[219]: Positive Trust Anchors: Nov 8 00:24:54.290947 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:24:54.290997 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:24:54.317728 systemd-resolved[219]: Defaulting to hostname 'linux'. Nov 8 00:24:54.318906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:24:54.322074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:24:54.341424 kernel: SCSI subsystem initialized Nov 8 00:24:54.351424 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:24:54.362424 kernel: iscsi: registered transport (tcp) Nov 8 00:24:54.382864 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:24:54.382915 kernel: QLogic iSCSI HBA Driver Nov 8 00:24:54.418533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 8 00:24:54.425646 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:24:54.452349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:24:54.452435 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:24:54.455754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:24:54.495427 kernel: raid6: avx512x4 gen() 18583 MB/s Nov 8 00:24:54.514422 kernel: raid6: avx512x2 gen() 18593 MB/s Nov 8 00:24:54.533416 kernel: raid6: avx512x1 gen() 18679 MB/s Nov 8 00:24:54.552418 kernel: raid6: avx2x4 gen() 18489 MB/s Nov 8 00:24:54.571421 kernel: raid6: avx2x2 gen() 18578 MB/s Nov 8 00:24:54.591953 kernel: raid6: avx2x1 gen() 14216 MB/s Nov 8 00:24:54.591995 kernel: raid6: using algorithm avx512x1 gen() 18679 MB/s Nov 8 00:24:54.613428 kernel: raid6: .... xor() 26953 MB/s, rmw enabled Nov 8 00:24:54.613458 kernel: raid6: using avx512x2 recovery algorithm Nov 8 00:24:54.636427 kernel: xor: automatically using best checksumming function avx Nov 8 00:24:54.788434 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:24:54.798294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:24:54.808558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:24:54.821223 systemd-udevd[396]: Using default interface naming scheme 'v255'. Nov 8 00:24:54.825700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:24:54.841520 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:24:54.854448 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Nov 8 00:24:54.879440 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:24:54.890560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 8 00:24:54.932286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:24:54.949596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:24:54.970595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:24:54.977205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:24:54.983552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:24:54.986690 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:24:55.001629 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:24:55.028694 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:24:55.032825 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:24:55.042428 kernel: hv_vmbus: Vmbus version:5.2 Nov 8 00:24:55.063779 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:24:55.063836 kernel: AES CTR mode by8 optimization enabled Nov 8 00:24:55.073446 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 8 00:24:55.079399 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 00:24:55.079470 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 00:24:55.088908 kernel: PTP clock support registered Nov 8 00:24:55.090964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:24:55.091180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:24:55.106805 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 8 00:24:55.108528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 8 00:24:54.967202 kernel: hv_utils: Registering HyperV Utility Driver Nov 8 00:24:54.977139 kernel: hv_vmbus: registering driver hv_utils Nov 8 00:24:54.977157 kernel: hv_utils: Heartbeat IC version 3.0 Nov 8 00:24:54.977166 kernel: hv_utils: Shutdown IC version 3.2 Nov 8 00:24:54.977174 kernel: hv_utils: TimeSync IC version 4.0 Nov 8 00:24:54.977184 kernel: hv_vmbus: registering driver hv_netvsc Nov 8 00:24:54.977194 systemd-journald[176]: Time jumped backwards, rotating. Nov 8 00:24:54.937884 systemd-resolved[219]: Clock change detected. Flushing caches. Nov 8 00:24:54.974804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:24:54.975095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:24:54.978016 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:54.993937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:55.006201 kernel: hv_vmbus: registering driver hv_storvsc Nov 8 00:24:55.006430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:24:55.006551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:24:55.022633 kernel: scsi host1: storvsc_host_t Nov 8 00:24:55.022810 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:24:55.022822 kernel: scsi host0: storvsc_host_t Nov 8 00:24:55.022281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:24:55.032820 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 8 00:24:55.032877 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 8 00:24:55.047555 kernel: hv_vmbus: registering driver hid_hyperv Nov 8 00:24:55.052824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:24:55.065556 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 8 00:24:55.071545 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 8 00:24:55.072786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:24:55.083015 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 8 00:24:55.083231 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:24:55.088599 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 8 00:24:55.095810 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 8 00:24:55.096121 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 8 00:24:55.102286 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:24:55.102590 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 8 00:24:55.102815 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 8 00:24:55.104640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:24:55.110164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:55.117640 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:24:55.256592 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442) Nov 8 00:24:55.263557 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (450) Nov 8 00:24:55.277678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 8 00:24:55.295427 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 8 00:24:55.306753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:24:55.314682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Nov 8 00:24:55.314801 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 8 00:24:55.329736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:24:55.345553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:55.353549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:56.368425 disk-uuid[590]: The operation has completed successfully. Nov 8 00:24:56.372203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:24:56.445874 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:24:56.445986 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:24:56.476684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:24:56.482429 sh[703]: Success Nov 8 00:24:56.502577 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:24:56.603844 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:24:56.623654 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:24:56.629379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:24:56.648555 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:24:56.648591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:56.654790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:24:56.657552 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:24:56.660039 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:24:56.744095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:24:56.750099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Nov 8 00:24:56.759686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:24:56.766291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:24:56.779605 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: VF slot 1 added Nov 8 00:24:56.792551 kernel: hv_vmbus: registering driver hv_pci Nov 8 00:24:56.792586 kernel: hv_pci 7b63dc29-f54f-4b2f-89a3-88f915e0fe26: PCI VMBus probing: Using version 0x10004 Nov 8 00:24:56.802313 kernel: hv_pci 7b63dc29-f54f-4b2f-89a3-88f915e0fe26: PCI host bridge to bus f54f:00 Nov 8 00:24:56.802595 kernel: pci_bus f54f:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 8 00:24:56.805800 kernel: pci_bus f54f:00: No busn resource found for root bus, will use [bus 00-ff] Nov 8 00:24:56.812557 kernel: pci f54f:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 8 00:24:56.817572 kernel: pci f54f:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:24:56.821574 kernel: pci f54f:00:02.0: enabling Extended Tags Nov 8 00:24:56.835914 kernel: pci f54f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f54f:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 8 00:24:56.836110 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:56.836124 kernel: pci_bus f54f:00: busn_res: [bus 00-ff] end is updated to 00 Nov 8 00:24:56.847923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:56.847958 kernel: pci f54f:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:24:56.848174 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:56.869549 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:56.886159 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 8 00:24:56.888686 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:56.899058 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:24:56.912692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:24:56.919422 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:24:56.929708 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:24:56.965366 systemd-networkd[887]: lo: Link UP Nov 8 00:24:56.967874 systemd-networkd[887]: lo: Gained carrier Nov 8 00:24:56.969039 systemd-networkd[887]: Enumeration completed Nov 8 00:24:56.969458 systemd-networkd[887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:24:56.969463 systemd-networkd[887]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:24:56.982962 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:24:56.985764 systemd[1]: Reached target network.target - Network. Nov 8 00:24:56.995119 systemd-networkd[887]: eth0: Link UP Nov 8 00:24:56.997781 systemd-networkd[887]: eth0: Gained carrier Nov 8 00:24:56.999571 systemd-networkd[887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:24:57.073455 kernel: mlx5_core f54f:00:02.0: enabling device (0000 -> 0002) Nov 8 00:24:57.077629 kernel: mlx5_core f54f:00:02.0: firmware version: 14.30.5006 Nov 8 00:24:57.083598 systemd-networkd[887]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:24:57.309252 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: VF registering: eth1 Nov 8 00:24:57.309651 kernel: mlx5_core f54f:00:02.0 eth1: joined to eth0 Nov 8 00:24:57.325633 kernel: mlx5_core f54f:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 00:24:57.338488 kernel: mlx5_core f54f:00:02.0 enP62799s1: renamed from eth1 Nov 8 00:24:57.341596 ignition[882]: Ignition 2.19.0 Nov 8 00:24:57.341607 ignition[882]: Stage: fetch-offline Nov 8 00:24:57.341643 ignition[882]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.341654 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.345222 systemd-networkd[887]: eth1: Interface name change detected, renamed to enP62799s1. Nov 8 00:24:57.341801 ignition[882]: parsed url from cmdline: "" Nov 8 00:24:57.341806 ignition[882]: no config URL provided Nov 8 00:24:57.341813 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:24:57.341823 ignition[882]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:24:57.341829 ignition[882]: failed to fetch config: resource requires networking Nov 8 00:24:57.350677 ignition[882]: Ignition finished successfully Nov 8 00:24:57.365906 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:24:57.379817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:24:57.397804 ignition[907]: Ignition 2.19.0 Nov 8 00:24:57.397815 ignition[907]: Stage: fetch Nov 8 00:24:57.398030 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.398043 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.399663 ignition[907]: parsed url from cmdline: "" Nov 8 00:24:57.399669 ignition[907]: no config URL provided Nov 8 00:24:57.399676 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:24:57.399687 ignition[907]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:24:57.399713 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 8 00:24:57.468804 kernel: mlx5_core f54f:00:02.0 enP62799s1: Link up Nov 8 00:24:57.468625 systemd-networkd[887]: enP62799s1: Link UP Nov 8 00:24:57.482697 ignition[907]: GET result: OK Nov 8 00:24:57.482804 ignition[907]: config has been read from IMDS userdata Nov 8 00:24:57.482837 ignition[907]: parsing config with SHA512: 6c2ecc4c611834c8b6789170854a653d9107590896be8a1818db62e6b764b4c8a3b05bfac982b1f360e6ab0f21f563a05fa8c74addda9cfcb185168112d3f38b Nov 8 00:24:57.491640 unknown[907]: fetched base config from "system" Nov 8 00:24:57.499624 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: Data path switched to VF: enP62799s1 Nov 8 00:24:57.492351 ignition[907]: fetch: fetch complete Nov 8 00:24:57.491667 unknown[907]: fetched base config from "system" Nov 8 00:24:57.492359 ignition[907]: fetch: fetch passed Nov 8 00:24:57.491676 unknown[907]: fetched user config from "azure" Nov 8 00:24:57.492411 ignition[907]: Ignition finished successfully Nov 8 00:24:57.495998 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:24:57.512752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 8 00:24:57.529118 ignition[913]: Ignition 2.19.0 Nov 8 00:24:57.529127 ignition[913]: Stage: kargs Nov 8 00:24:57.532016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:24:57.529342 ignition[913]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.529355 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.530217 ignition[913]: kargs: kargs passed Nov 8 00:24:57.530257 ignition[913]: Ignition finished successfully Nov 8 00:24:57.544728 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:24:57.558393 ignition[919]: Ignition 2.19.0 Nov 8 00:24:57.558403 ignition[919]: Stage: disks Nov 8 00:24:57.561092 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:24:57.558648 ignition[919]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:57.564732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:24:57.558661 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:57.569826 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:24:57.559568 ignition[919]: disks: disks passed Nov 8 00:24:57.573034 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:24:57.559612 ignition[919]: Ignition finished successfully Nov 8 00:24:57.578013 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:24:57.586475 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:24:57.606693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:24:57.632874 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 8 00:24:57.638593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:24:57.649782 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 8 00:24:57.742556 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:24:57.742710 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:24:57.743342 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:24:57.755609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:24:57.768636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Nov 8 00:24:57.774583 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:57.774618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:57.772659 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:24:57.788688 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:57.781683 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:24:57.794621 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:57.785799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:24:57.785835 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:24:57.803982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:24:57.808599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:24:57.820677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 8 00:24:57.971186 coreos-metadata[953]: Nov 08 00:24:57.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:24:57.975566 coreos-metadata[953]: Nov 08 00:24:57.973 INFO Fetch successful Nov 8 00:24:57.975566 coreos-metadata[953]: Nov 08 00:24:57.973 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:24:57.983400 coreos-metadata[953]: Nov 08 00:24:57.983 INFO Fetch successful Nov 8 00:24:57.986604 coreos-metadata[953]: Nov 08 00:24:57.986 INFO wrote hostname ci-4081.3.6-n-75d3e74165 to /sysroot/etc/hostname Nov 8 00:24:57.988064 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:24:58.012753 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:24:58.027451 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:24:58.032827 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:24:58.039320 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:24:58.042809 systemd-networkd[887]: enP62799s1: Gained carrier Nov 8 00:24:58.045079 systemd-networkd[887]: eth0: Gained IPv6LL Nov 8 00:24:58.321655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:24:58.333633 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:24:58.345649 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:24:58.357466 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 8 00:24:58.363451 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:58.388585 ignition[1056]: INFO : Ignition 2.19.0 Nov 8 00:24:58.388585 ignition[1056]: INFO : Stage: mount Nov 8 00:24:58.388585 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:58.388585 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:58.403609 ignition[1056]: INFO : mount: mount passed Nov 8 00:24:58.403609 ignition[1056]: INFO : Ignition finished successfully Nov 8 00:24:58.391136 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:24:58.393858 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:24:58.414689 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:24:58.429719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:24:58.444590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1067) Nov 8 00:24:58.448561 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:24:58.448590 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:24:58.453363 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:24:58.459555 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:24:58.461732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:24:58.487554 ignition[1084]: INFO : Ignition 2.19.0 Nov 8 00:24:58.487554 ignition[1084]: INFO : Stage: files Nov 8 00:24:58.487554 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:58.487554 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:58.497545 ignition[1084]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:24:58.497545 ignition[1084]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:24:58.497545 ignition[1084]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:24:58.520378 ignition[1084]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:24:58.523993 ignition[1084]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:24:58.523993 ignition[1084]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:24:58.520811 unknown[1084]: wrote ssh authorized keys file for user: core Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:24:58.533232 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:24:58.621189 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:24:58.685223 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 
00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 
ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:58.691576 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:24:59.037012 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:24:59.377304 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:24:59.377304 ignition[1084]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:24:59.392218 ignition[1084]: INFO : files: files passed Nov 8 00:24:59.392218 ignition[1084]: INFO : Ignition finished successfully Nov 8 00:24:59.384665 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:24:59.449697 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:24:59.456047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:24:59.459275 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:24:59.461585 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:24:59.474253 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.474253 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.485839 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:24:59.478319 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:24:59.493039 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:24:59.503681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:24:59.526385 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:24:59.526497 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Nov 8 00:24:59.535809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:24:59.541267 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:24:59.544084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:24:59.552715 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:24:59.567168 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:24:59.579663 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:24:59.594783 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:24:59.600817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:24:59.606791 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:24:59.611432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:24:59.611590 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:24:59.617553 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:24:59.622483 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:24:59.627368 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:24:59.632332 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:24:59.640590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:24:59.646356 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:24:59.648975 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:24:59.654764 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:24:59.660734 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Nov 8 00:24:59.665718 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:24:59.670468 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:24:59.670642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:24:59.675706 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:24:59.682814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:24:59.691016 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:24:59.693294 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:24:59.696631 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:24:59.696763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:24:59.707920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:24:59.708106 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:24:59.717133 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:24:59.717307 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:24:59.722388 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:24:59.722503 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:24:59.742723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:24:59.748902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:24:59.750283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:24:59.752279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:24:59.761754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:24:59.761955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 8 00:24:59.777822 ignition[1137]: INFO : Ignition 2.19.0 Nov 8 00:24:59.777822 ignition[1137]: INFO : Stage: umount Nov 8 00:24:59.777822 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:24:59.777822 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:24:59.771133 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:24:59.790456 ignition[1137]: INFO : umount: umount passed Nov 8 00:24:59.790456 ignition[1137]: INFO : Ignition finished successfully Nov 8 00:24:59.771259 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:24:59.780287 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:24:59.780491 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:24:59.787939 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:24:59.788034 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:24:59.811071 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:24:59.811145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:24:59.816300 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:24:59.816357 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:24:59.821366 systemd[1]: Stopped target network.target - Network. Nov 8 00:24:59.823481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:24:59.823547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:24:59.828695 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:24:59.842028 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:24:59.844785 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:24:59.852281 systemd[1]: Stopped target slices.target - Slice Units. 
Nov 8 00:24:59.856942 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:24:59.861615 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:24:59.861674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:24:59.866121 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:24:59.866162 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:24:59.871063 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:24:59.871117 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:24:59.875745 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:24:59.875795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:24:59.878708 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:24:59.883850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:24:59.889650 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:24:59.900605 systemd-networkd[887]: eth0: DHCPv6 lease lost Nov 8 00:24:59.902525 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:24:59.902640 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:24:59.908142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:24:59.908223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:24:59.927695 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:24:59.932836 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:24:59.932908 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:24:59.941833 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:24:59.945864 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Nov 8 00:24:59.945969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:24:59.963837 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:24:59.966369 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:24:59.974299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:24:59.974641 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:24:59.979713 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:24:59.979757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:24:59.987938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:24:59.987996 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:24:59.998120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:24:59.998186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:25:00.003786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:25:00.003831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:25:00.015547 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: Data path switched from VF: enP62799s1 Nov 8 00:25:00.023728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:25:00.026635 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:25:00.026706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:25:00.029375 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:25:00.029430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:25:00.032911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 8 00:25:00.032956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:25:00.054247 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:25:00.054317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:25:00.060184 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:25:00.060241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:25:00.072557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:25:00.072627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:25:00.081487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:25:00.081562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:25:00.087422 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:25:00.087520 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:25:00.092744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:25:00.092833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:25:00.610515 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:25:00.610729 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:25:00.617446 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:25:00.621622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:25:00.621691 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:25:00.637770 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:25:00.741558 systemd[1]: Switching root. 
Nov 8 00:25:00.774576 systemd-journald[176]: Journal stopped Nov 8 00:25:02.916957 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Nov 8 00:25:02.916987 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:25:02.916999 kernel: SELinux: policy capability open_perms=1 Nov 8 00:25:02.917010 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:25:02.917018 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:25:02.917028 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:25:02.917038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:25:02.917050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:25:02.917060 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:25:02.917068 kernel: audit: type=1403 audit(1762561501.396:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:25:02.917080 systemd[1]: Successfully loaded SELinux policy in 63.824ms. Nov 8 00:25:02.917091 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.985ms. Nov 8 00:25:02.917104 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:25:02.917115 systemd[1]: Detected virtualization microsoft. Nov 8 00:25:02.917130 systemd[1]: Detected architecture x86-64. Nov 8 00:25:02.917140 systemd[1]: Detected first boot. Nov 8 00:25:02.917152 systemd[1]: Hostname set to . Nov 8 00:25:02.917162 systemd[1]: Initializing machine ID from random generator. Nov 8 00:25:02.917174 zram_generator::config[1197]: No configuration found. Nov 8 00:25:02.917188 systemd[1]: Populated /etc with preset unit settings. 
Nov 8 00:25:02.917199 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:25:02.917209 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:25:02.917221 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:25:02.917234 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:25:02.917245 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:25:02.917260 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:25:02.917274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:25:02.917286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:25:02.917300 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:25:02.917311 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:25:02.917323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:25:02.917336 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:25:02.917348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:25:02.917361 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:25:02.917376 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:25:02.917387 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:25:02.917399 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:25:02.917409 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 8 00:25:02.917421 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:25:02.917435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:25:02.917451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:25:02.917464 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:25:02.917480 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:25:02.917495 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:25:02.917511 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:25:02.917527 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:25:02.917553 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:25:02.917569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:25:02.917585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:25:02.917604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:25:02.917620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:25:02.917638 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:25:02.917654 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:25:02.917670 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:25:02.917690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:02.917707 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:25:02.917724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:25:02.917741 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 8 00:25:02.917757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:25:02.917773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:25:02.917788 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:25:02.917806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:25:02.917826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:25:02.917842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:25:02.917860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:25:02.917877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:25:02.917894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:25:02.917912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:25:02.917929 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 8 00:25:02.917948 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:25:02.917968 kernel: fuse: init (API version 7.39) Nov 8 00:25:02.917985 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:25:02.918003 kernel: ACPI: bus type drm_connector registered Nov 8 00:25:02.918018 kernel: loop: module loaded Nov 8 00:25:02.918034 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:25:02.918053 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 8 00:25:02.918073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:25:02.918116 systemd-journald[1319]: Collecting audit messages is disabled. Nov 8 00:25:02.918153 systemd-journald[1319]: Journal started Nov 8 00:25:02.918192 systemd-journald[1319]: Runtime Journal (/run/log/journal/b7493b7f906741baae4e0bab526c108d) is 8.0M, max 158.8M, 150.8M free. Nov 8 00:25:02.926573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:25:02.943357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:02.943409 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:25:02.949463 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:25:02.952502 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:25:02.955674 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:25:02.958371 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:25:02.961521 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:25:02.964688 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:25:02.967808 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:25:02.971710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:25:02.975886 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:25:02.976235 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:25:02.980335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:25:02.980816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:25:02.984666 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 8 00:25:02.984859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:25:02.988359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:25:02.988801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:25:02.992811 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:25:02.993132 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:25:02.996917 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:25:02.997202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:25:03.000869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:25:03.004481 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:25:03.008519 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:25:03.028008 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:25:03.037677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:25:03.044634 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:25:03.049906 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:25:03.063707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:25:03.068099 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:25:03.071219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:25:03.075695 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Nov 8 00:25:03.079276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:25:03.082766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:25:03.094166 systemd-journald[1319]: Time spent on flushing to /var/log/journal/b7493b7f906741baae4e0bab526c108d is 119.006ms for 944 entries. Nov 8 00:25:03.094166 systemd-journald[1319]: System Journal (/var/log/journal/b7493b7f906741baae4e0bab526c108d) is 11.8M, max 2.6G, 2.6G free. Nov 8 00:25:03.253689 systemd-journald[1319]: Received client request to flush runtime journal. Nov 8 00:25:03.253752 systemd-journald[1319]: /var/log/journal/b7493b7f906741baae4e0bab526c108d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Nov 8 00:25:03.253840 systemd-journald[1319]: Rotating system journal. Nov 8 00:25:03.094719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:25:03.106737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:25:03.113755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:25:03.117130 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:25:03.130742 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:25:03.138272 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:25:03.147491 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:25:03.170698 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Nov 8 00:25:03.170711 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Nov 8 00:25:03.173732 udevadm[1364]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Nov 8 00:25:03.179906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:25:03.183518 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:25:03.199770 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:25:03.258132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:25:03.303482 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:25:03.311958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:25:03.343012 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 8 00:25:03.343397 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 8 00:25:03.349730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:25:03.780185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:25:03.787215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:25:03.823452 systemd-udevd[1386]: Using default interface naming scheme 'v255'. Nov 8 00:25:03.886526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:25:03.900715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:25:03.947755 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:25:03.961426 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 8 00:25:04.038564 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 8 00:25:04.080622 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:25:04.121557 kernel: hv_vmbus: registering driver hyperv_fb Nov 8 00:25:04.133563 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 8 00:25:04.137552 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 8 00:25:04.140562 kernel: hv_vmbus: registering driver hv_balloon Nov 8 00:25:04.147420 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:25:04.147544 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 8 00:25:04.157167 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:25:04.175517 systemd-networkd[1393]: lo: Link UP Nov 8 00:25:04.176231 systemd-networkd[1393]: lo: Gained carrier Nov 8 00:25:04.180421 systemd-networkd[1393]: Enumeration completed Nov 8 00:25:04.181739 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:25:04.185217 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:25:04.185322 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:25:04.289998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1396) Nov 8 00:25:04.296737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:25:04.342002 kernel: mlx5_core f54f:00:02.0 enP62799s1: Link up Nov 8 00:25:04.337188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Nov 8 00:25:04.361043 kernel: hv_netvsc 000d3ab3-9efe-000d-3ab3-9efe000d3ab3 eth0: Data path switched to VF: enP62799s1 Nov 8 00:25:04.367690 systemd-networkd[1393]: enP62799s1: Link UP Nov 8 00:25:04.367946 systemd-networkd[1393]: eth0: Link UP Nov 8 00:25:04.372566 systemd-networkd[1393]: eth0: Gained carrier Nov 8 00:25:04.372675 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:25:04.377240 systemd-networkd[1393]: enP62799s1: Gained carrier Nov 8 00:25:04.377339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:25:04.416818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:25:04.417151 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:25:04.423257 systemd-networkd[1393]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:25:04.495575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:25:04.503774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:25:04.504070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:25:04.513078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:25:04.612551 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 8 00:25:04.646199 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:25:04.658663 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:25:04.687357 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:25:04.705395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:25:04.718548 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Nov 8 00:25:04.721962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:25:04.730704 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:25:04.735881 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:25:04.772672 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:25:04.776958 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:25:04.780260 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:25:04.780393 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:25:04.783000 systemd[1]: Reached target machines.target - Containers. Nov 8 00:25:04.787041 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:25:04.797719 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:25:04.802049 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:25:04.804777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:25:04.807628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:25:04.813751 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:25:04.820425 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:25:04.824687 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Nov 8 00:25:04.857757 kernel: loop0: detected capacity change from 0 to 142488 Nov 8 00:25:04.852770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:25:04.864802 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:25:04.865768 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:25:04.985031 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:25:05.016552 kernel: loop1: detected capacity change from 0 to 140768 Nov 8 00:25:05.147563 kernel: loop2: detected capacity change from 0 to 31056 Nov 8 00:25:05.280553 kernel: loop3: detected capacity change from 0 to 224512 Nov 8 00:25:05.315555 kernel: loop4: detected capacity change from 0 to 142488 Nov 8 00:25:05.345560 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:25:05.370559 kernel: loop6: detected capacity change from 0 to 31056 Nov 8 00:25:05.382551 kernel: loop7: detected capacity change from 0 to 224512 Nov 8 00:25:05.397457 (sd-merge)[1505]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 8 00:25:05.400472 (sd-merge)[1505]: Merged extensions into '/usr'. Nov 8 00:25:05.404317 systemd[1]: Reloading requested from client PID 1492 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:25:05.404332 systemd[1]: Reloading... Nov 8 00:25:05.478723 zram_generator::config[1529]: No configuration found. Nov 8 00:25:05.679656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:05.760177 systemd[1]: Reloading finished in 355 ms. Nov 8 00:25:05.778042 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:25:05.794759 systemd[1]: Starting ensure-sysext.service... 
Nov 8 00:25:05.801699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:25:05.817610 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:25:05.817631 systemd[1]: Reloading... Nov 8 00:25:05.842945 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:25:05.845975 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:25:05.847761 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:25:05.848306 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Nov 8 00:25:05.848487 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Nov 8 00:25:05.857509 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:25:05.858006 systemd-tmpfiles[1598]: Skipping /boot Nov 8 00:25:05.875235 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:25:05.877711 systemd-tmpfiles[1598]: Skipping /boot Nov 8 00:25:05.916712 zram_generator::config[1624]: No configuration found. Nov 8 00:25:06.074736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:06.156850 systemd[1]: Reloading finished in 338 ms. Nov 8 00:25:06.174377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:25:06.198748 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:25:06.205151 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 8 00:25:06.213717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:25:06.231735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:25:06.251617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:25:06.261828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.262132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:25:06.270853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:25:06.284924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:25:06.301516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:25:06.312375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:25:06.313273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.318945 augenrules[1719]: No rules Nov 8 00:25:06.321750 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:25:06.326048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:25:06.337013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:25:06.337358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:25:06.342717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:25:06.343045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:25:06.347982 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 8 00:25:06.348338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:25:06.362640 systemd-networkd[1393]: eth0: Gained IPv6LL Nov 8 00:25:06.371969 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:25:06.379152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:25:06.390030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.390311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:25:06.395284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:25:06.408941 systemd-resolved[1704]: Positive Trust Anchors: Nov 8 00:25:06.408961 systemd-resolved[1704]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:25:06.409008 systemd-resolved[1704]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:25:06.409649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:25:06.420599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:25:06.426920 systemd-resolved[1704]: Using system hostname 'ci-4081.3.6-n-75d3e74165'. Nov 8 00:25:06.428739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 8 00:25:06.429658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.431161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:25:06.431380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:25:06.435729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:25:06.439056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:25:06.439239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:25:06.443014 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:25:06.443192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:25:06.455742 systemd[1]: Reached target network.target - Network. Nov 8 00:25:06.458498 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:25:06.461396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:25:06.465138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.465581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:25:06.470812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:25:06.478375 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:25:06.489467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:25:06.496577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:25:06.501255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 8 00:25:06.502632 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:25:06.506072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:25:06.511037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:25:06.511234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:25:06.515488 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:25:06.515733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:25:06.519311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:25:06.519551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:25:06.523723 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:25:06.523964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:25:06.530692 systemd[1]: Finished ensure-sysext.service. Nov 8 00:25:06.538524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:25:06.538618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:25:06.553676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:25:06.557821 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:25:06.728811 ldconfig[1488]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:25:06.742616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Nov 8 00:25:06.751806 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:25:06.764243 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:25:06.767960 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:25:06.770850 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:25:06.774080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:25:06.777453 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:25:06.780480 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:25:06.783803 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:25:06.786964 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:25:06.787009 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:25:06.789433 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:25:06.793115 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:25:06.797788 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:25:06.801932 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:25:06.806437 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:25:06.809430 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:25:06.811955 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:25:06.815143 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:25:06.815303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 8 00:25:06.815420 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:25:06.872663 systemd[1]: Starting chronyd.service - NTP client/server... Nov 8 00:25:06.878639 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:25:06.883703 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:25:06.902746 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:25:06.910258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:25:06.916715 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:25:06.926699 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:25:06.926753 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 8 00:25:06.934352 (chronyd)[1772]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 8 00:25:06.935278 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 8 00:25:06.938612 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 8 00:25:06.943037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:06.950695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:25:06.952676 jq[1777]: false Nov 8 00:25:06.960157 chronyd[1787]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 8 00:25:06.965401 KVP[1781]: KVP starting; pid is:1781 Nov 8 00:25:06.965803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 8 00:25:06.969867 chronyd[1787]: Timezone right/UTC failed leap second check, ignoring Nov 8 00:25:06.971555 chronyd[1787]: Loaded seccomp filter (level 2) Nov 8 00:25:06.976905 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:25:06.985759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:25:06.990598 kernel: hv_utils: KVP IC version 4.0 Nov 8 00:25:06.990606 KVP[1781]: KVP LIC Version: 3.1 Nov 8 00:25:07.004722 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:25:07.008924 dbus-daemon[1776]: [system] SELinux support is enabled Nov 8 00:25:07.017726 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:25:07.021312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:25:07.030990 extend-filesystems[1780]: Found loop4 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found loop5 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found loop6 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found loop7 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda1 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda2 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda3 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found usr Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda4 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda6 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda7 Nov 8 00:25:07.030990 extend-filesystems[1780]: Found sda9 Nov 8 00:25:07.030990 extend-filesystems[1780]: Checking size of /dev/sda9 Nov 8 00:25:07.154043 extend-filesystems[1780]: Old size kept for /dev/sda9 Nov 8 00:25:07.154043 extend-filesystems[1780]: Found sr0 Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.060 INFO Fetching http://168.63.129.16/?comp=versions: Attempt 
#1 Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.063 INFO Fetch successful Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.064 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.072 INFO Fetch successful Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.073 INFO Fetching http://168.63.129.16/machine/09eda13b-46ea-42f6-b5d9-b080b02531e2/eac3de62%2D9160%2D4b72%2Da7c9%2D620b9b422427.%5Fci%2D4081.3.6%2Dn%2D75d3e74165?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.075 INFO Fetch successful Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.075 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:25:07.163043 coreos-metadata[1774]: Nov 08 00:25:07.102 INFO Fetch successful Nov 8 00:25:07.037729 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:25:07.088718 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:25:07.109971 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:25:07.163861 jq[1816]: true Nov 8 00:25:07.130231 systemd[1]: Started chronyd.service - NTP client/server. Nov 8 00:25:07.145768 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:25:07.146116 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:25:07.146484 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:25:07.149869 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:25:07.174889 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:25:07.175203 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 8 00:25:07.182294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:25:07.201465 update_engine[1805]: I20251108 00:25:07.201363 1805 main.cc:92] Flatcar Update Engine starting Nov 8 00:25:07.203832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:25:07.204158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:25:07.215245 update_engine[1805]: I20251108 00:25:07.215042 1805 update_check_scheduler.cc:74] Next update check in 6m52s Nov 8 00:25:07.236692 jq[1840]: true Nov 8 00:25:07.260560 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1834) Nov 8 00:25:07.264372 (ntainerd)[1841]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:25:07.281409 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:25:07.281468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:25:07.285520 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:25:07.285565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:25:07.289136 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:25:07.293053 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:25:07.296062 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:25:07.296902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Nov 8 00:25:07.301834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:25:07.370903 systemd-logind[1801]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:25:07.382365 tar[1837]: linux-amd64/LICENSE Nov 8 00:25:07.382365 tar[1837]: linux-amd64/helm Nov 8 00:25:07.382649 systemd-logind[1801]: New seat seat0. Nov 8 00:25:07.397961 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:25:07.415064 bash[1886]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:25:07.407788 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:25:07.423741 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:25:07.504354 locksmithd[1874]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:25:08.075206 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:25:08.078317 containerd[1841]: time="2025-11-08T00:25:08.078233700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:25:08.111079 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:25:08.123824 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:25:08.134238 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 8 00:25:08.157090 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:25:08.157448 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:25:08.179869 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:25:08.207722 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 8 00:25:08.222783 containerd[1841]: time="2025-11-08T00:25:08.222585700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:25:08.225597 containerd[1841]: time="2025-11-08T00:25:08.225517700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:25:08.225709 containerd[1841]: time="2025-11-08T00:25:08.225690500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:25:08.225804 containerd[1841]: time="2025-11-08T00:25:08.225788600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:25:08.226031 containerd[1841]: time="2025-11-08T00:25:08.226011500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:25:08.226129 containerd[1841]: time="2025-11-08T00:25:08.226111200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.226283 containerd[1841]: time="2025-11-08T00:25:08.226261200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:25:08.226357 containerd[1841]: time="2025-11-08T00:25:08.226340900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.226756 containerd[1841]: time="2025-11-08T00:25:08.226727300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:25:08.226844 containerd[1841]: time="2025-11-08T00:25:08.226828800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.226933 containerd[1841]: time="2025-11-08T00:25:08.226912300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:25:08.227001 containerd[1841]: time="2025-11-08T00:25:08.226987900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.227172 containerd[1841]: time="2025-11-08T00:25:08.227154100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.227476 containerd[1841]: time="2025-11-08T00:25:08.227454200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:25:08.227841 containerd[1841]: time="2025-11-08T00:25:08.227813000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:25:08.228173 containerd[1841]: time="2025-11-08T00:25:08.227958700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:25:08.228173 containerd[1841]: time="2025-11-08T00:25:08.228082400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 8 00:25:08.228173 containerd[1841]: time="2025-11-08T00:25:08.228142800Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:25:08.229490 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:25:08.243666 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:25:08.255592 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:25:08.258246 containerd[1841]: time="2025-11-08T00:25:08.258202900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:25:08.258502 containerd[1841]: time="2025-11-08T00:25:08.258383700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:25:08.258502 containerd[1841]: time="2025-11-08T00:25:08.258416700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:25:08.258502 containerd[1841]: time="2025-11-08T00:25:08.258459300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:25:08.258502 containerd[1841]: time="2025-11-08T00:25:08.258480500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:25:08.258884 containerd[1841]: time="2025-11-08T00:25:08.258689600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:25:08.259665 containerd[1841]: time="2025-11-08T00:25:08.259563900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259742300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259787000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259807800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259827700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259845500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259885300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.259910 containerd[1841]: time="2025-11-08T00:25:08.259906300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.259925200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.259957400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.259976000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.259993400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.260031700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260163 containerd[1841]: time="2025-11-08T00:25:08.260072500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260192500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260217000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260235500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260267700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260285900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260305200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260323600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260370 containerd[1841]: time="2025-11-08T00:25:08.260356600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260373600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260400600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260442800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260468300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260511000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260547200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260565200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:25:08.260681 containerd[1841]: time="2025-11-08T00:25:08.260656100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:25:08.260876 containerd[1841]: time="2025-11-08T00:25:08.260680500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261127100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261157400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261173200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261205400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261222300Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:25:08.263006 containerd[1841]: time="2025-11-08T00:25:08.261239500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:25:08.261782 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:25:08.270569 containerd[1841]: time="2025-11-08T00:25:08.268858000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false 
IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:25:08.270569 containerd[1841]: time="2025-11-08T00:25:08.268965300Z" level=info msg="Connect containerd service" Nov 8 00:25:08.270569 containerd[1841]: time="2025-11-08T00:25:08.269033100Z" level=info msg="using legacy CRI server" Nov 8 00:25:08.270569 containerd[1841]: time="2025-11-08T00:25:08.269046600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:25:08.270569 containerd[1841]: time="2025-11-08T00:25:08.269208100Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271235100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271386800Z" level=info msg="Start subscribing containerd event" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271461400Z" level=info msg="Start recovering state" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271630000Z" level=info msg="Start event monitor" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271654300Z" level=info msg="Start snapshots syncer" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271666300Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:25:08.271930 containerd[1841]: time="2025-11-08T00:25:08.271677100Z" level=info msg="Start streaming server" Nov 8 00:25:08.273059 containerd[1841]: time="2025-11-08T00:25:08.273035200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:25:08.273118 containerd[1841]: time="2025-11-08T00:25:08.273103000Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:25:08.273746 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:25:08.286548 containerd[1841]: time="2025-11-08T00:25:08.286012100Z" level=info msg="containerd successfully booted in 0.210960s" Nov 8 00:25:08.462743 tar[1837]: linux-amd64/README.md Nov 8 00:25:08.479038 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:25:08.981711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:08.985824 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 8 00:25:08.989349 systemd[1]: Startup finished in 595ms (firmware) + 5.175s (loader) + 8.785s (kernel) + 7.654s (userspace) = 22.210s. Nov 8 00:25:08.996080 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:09.157756 login[1946]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:25:09.160672 login[1947]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:25:09.179753 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:25:09.181784 systemd-logind[1801]: New session 2 of user core. Nov 8 00:25:09.187855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:25:09.194171 systemd-logind[1801]: New session 1 of user core. Nov 8 00:25:09.222742 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:25:09.234249 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 8 00:25:09.239065 (systemd)[1973]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:25:09.279993 waagent[1942]: 2025-11-08T00:25:09.279492Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 8 00:25:09.283595 waagent[1942]: 2025-11-08T00:25:09.282854Z INFO Daemon Daemon OS: flatcar 4081.3.6 Nov 8 00:25:09.285697 waagent[1942]: 2025-11-08T00:25:09.285608Z INFO Daemon Daemon Python: 3.11.9 Nov 8 00:25:09.288192 waagent[1942]: 2025-11-08T00:25:09.288085Z INFO Daemon Daemon Run daemon Nov 8 00:25:09.294566 waagent[1942]: 2025-11-08T00:25:09.292666Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Nov 8 00:25:09.299544 waagent[1942]: 2025-11-08T00:25:09.297299Z INFO Daemon Daemon Using waagent for provisioning Nov 8 00:25:09.302545 waagent[1942]: 2025-11-08T00:25:09.300342Z INFO Daemon Daemon Activate resource disk Nov 8 00:25:09.305543 waagent[1942]: 2025-11-08T00:25:09.302821Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 8 00:25:09.322975 waagent[1942]: 2025-11-08T00:25:09.322910Z INFO Daemon Daemon Found device: None Nov 8 00:25:09.323337 waagent[1942]: 2025-11-08T00:25:09.323290Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 8 00:25:09.324416 waagent[1942]: 2025-11-08T00:25:09.324379Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 8 00:25:09.330779 waagent[1942]: 2025-11-08T00:25:09.329730Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:25:09.330779 waagent[1942]: 2025-11-08T00:25:09.330389Z INFO Daemon Daemon Running default provisioning handler Nov 8 00:25:09.346458 waagent[1942]: 2025-11-08T00:25:09.346056Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Nov 8 00:25:09.348593 waagent[1942]: 2025-11-08T00:25:09.348526Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 8 00:25:09.349035 waagent[1942]: 2025-11-08T00:25:09.348999Z INFO Daemon Daemon cloud-init is enabled: False Nov 8 00:25:09.349448 waagent[1942]: 2025-11-08T00:25:09.349416Z INFO Daemon Daemon Copying ovf-env.xml Nov 8 00:25:09.423560 waagent[1942]: 2025-11-08T00:25:09.419399Z INFO Daemon Daemon Successfully mounted dvd Nov 8 00:25:09.441083 waagent[1942]: 2025-11-08T00:25:09.437091Z INFO Daemon Daemon Detect protocol endpoint Nov 8 00:25:09.441083 waagent[1942]: 2025-11-08T00:25:09.439764Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:25:09.438281 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 8 00:25:09.442912 waagent[1942]: 2025-11-08T00:25:09.442847Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 8 00:25:09.446036 waagent[1942]: 2025-11-08T00:25:09.445977Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 8 00:25:09.448879 waagent[1942]: 2025-11-08T00:25:09.448824Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 8 00:25:09.452066 waagent[1942]: 2025-11-08T00:25:09.452012Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 8 00:25:09.463271 systemd[1973]: Queued start job for default target default.target. Nov 8 00:25:09.464894 systemd[1973]: Created slice app.slice - User Application Slice. Nov 8 00:25:09.464925 systemd[1973]: Reached target paths.target - Paths. Nov 8 00:25:09.464943 systemd[1973]: Reached target timers.target - Timers. Nov 8 00:25:09.469457 systemd[1973]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Nov 8 00:25:09.484566 waagent[1942]: 2025-11-08T00:25:09.481165Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 8 00:25:09.482141 systemd[1973]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:25:09.482203 systemd[1973]: Reached target sockets.target - Sockets. Nov 8 00:25:09.482217 systemd[1973]: Reached target basic.target - Basic System. Nov 8 00:25:09.482255 systemd[1973]: Reached target default.target - Main User Target. Nov 8 00:25:09.482285 systemd[1973]: Startup finished in 235ms. Nov 8 00:25:09.482413 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:25:09.488275 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:25:09.491037 waagent[1942]: 2025-11-08T00:25:09.489566Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 8 00:25:09.491037 waagent[1942]: 2025-11-08T00:25:09.489867Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 8 00:25:09.489061 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:25:09.628464 waagent[1942]: 2025-11-08T00:25:09.627844Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 8 00:25:09.631995 waagent[1942]: 2025-11-08T00:25:09.631348Z INFO Daemon Daemon Forcing an update of the goal state. Nov 8 00:25:09.637424 waagent[1942]: 2025-11-08T00:25:09.637366Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:25:09.653654 waagent[1942]: 2025-11-08T00:25:09.653479Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 8 00:25:09.656813 waagent[1942]: 2025-11-08T00:25:09.656458Z INFO Daemon Nov 8 00:25:09.659437 waagent[1942]: 2025-11-08T00:25:09.658851Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e2cebc61-2d47-4586-bba7-4738f19d4c70 eTag: 8013174588431386044 source: Fabric] Nov 8 00:25:09.664628 waagent[1942]: 2025-11-08T00:25:09.664574Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Nov 8 00:25:09.669057 waagent[1942]: 2025-11-08T00:25:09.668281Z INFO Daemon Nov 8 00:25:09.670204 waagent[1942]: 2025-11-08T00:25:09.669779Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:25:09.678551 waagent[1942]: 2025-11-08T00:25:09.677325Z INFO Daemon Daemon Downloading artifacts profile blob Nov 8 00:25:09.748769 waagent[1942]: 2025-11-08T00:25:09.748651Z INFO Daemon Downloaded certificate {'thumbprint': '8682B0CCC4DD1D0D2AD389C05211A23083886882', 'hasPrivateKey': True} Nov 8 00:25:09.749623 waagent[1942]: 2025-11-08T00:25:09.749570Z INFO Daemon Fetch goal state completed Nov 8 00:25:09.757183 waagent[1942]: 2025-11-08T00:25:09.757139Z INFO Daemon Daemon Starting provisioning Nov 8 00:25:09.757574 waagent[1942]: 2025-11-08T00:25:09.757487Z INFO Daemon Daemon Handle ovf-env.xml. Nov 8 00:25:09.758387 waagent[1942]: 2025-11-08T00:25:09.758348Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-75d3e74165] Nov 8 00:25:09.760991 waagent[1942]: 2025-11-08T00:25:09.760949Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-75d3e74165] Nov 8 00:25:09.761990 waagent[1942]: 2025-11-08T00:25:09.761949Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 8 00:25:09.762888 waagent[1942]: 2025-11-08T00:25:09.762849Z INFO Daemon Daemon Primary interface is [eth0] Nov 8 00:25:09.777471 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:25:09.779051 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 8 00:25:09.779099 systemd-networkd[1393]: eth0: DHCP lease lost Nov 8 00:25:09.780368 waagent[1942]: 2025-11-08T00:25:09.779812Z INFO Daemon Daemon Create user account if not exists Nov 8 00:25:09.782641 systemd-networkd[1393]: eth0: DHCPv6 lease lost Nov 8 00:25:09.783466 waagent[1942]: 2025-11-08T00:25:09.782591Z INFO Daemon Daemon User core already exists, skip useradd Nov 8 00:25:09.783739 waagent[1942]: 2025-11-08T00:25:09.783680Z INFO Daemon Daemon Configure sudoer Nov 8 00:25:09.784508 waagent[1942]: 2025-11-08T00:25:09.784464Z INFO Daemon Daemon Configure sshd Nov 8 00:25:09.785272 waagent[1942]: 2025-11-08T00:25:09.785232Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 8 00:25:09.786671 waagent[1942]: 2025-11-08T00:25:09.785905Z INFO Daemon Daemon Deploy ssh public key. Nov 8 00:25:09.836592 systemd-networkd[1393]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:25:09.929139 kubelet[1963]: E1108 00:25:09.929084 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:09.931647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:09.932001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:20.019095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:25:20.025194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:20.151713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:25:20.151969 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:20.785415 kubelet[2043]: E1108 00:25:20.785356 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:20.789243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:20.789574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:30.767035 chronyd[1787]: Selected source PHC0 Nov 8 00:25:31.019027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:25:31.030766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:31.140701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:31.140959 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:31.840625 kubelet[2063]: E1108 00:25:31.840572 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:31.842939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:31.843253 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:25:39.859877 waagent[1942]: 2025-11-08T00:25:39.859797Z INFO Daemon Daemon Provisioning complete Nov 8 00:25:39.872226 waagent[1942]: 2025-11-08T00:25:39.872178Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 8 00:25:39.879190 waagent[1942]: 2025-11-08T00:25:39.872439Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 8 00:25:39.879190 waagent[1942]: 2025-11-08T00:25:39.873005Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 8 00:25:39.995786 waagent[2071]: 2025-11-08T00:25:39.995690Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 8 00:25:39.996242 waagent[2071]: 2025-11-08T00:25:39.995843Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Nov 8 00:25:39.996242 waagent[2071]: 2025-11-08T00:25:39.995923Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 8 00:25:40.018897 waagent[2071]: 2025-11-08T00:25:40.018825Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 8 00:25:40.019103 waagent[2071]: 2025-11-08T00:25:40.019056Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:25:40.019196 waagent[2071]: 2025-11-08T00:25:40.019158Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:25:40.027019 waagent[2071]: 2025-11-08T00:25:40.026955Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:25:40.036805 waagent[2071]: 2025-11-08T00:25:40.036754Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 8 00:25:40.037262 waagent[2071]: 2025-11-08T00:25:40.037205Z INFO ExtHandler Nov 8 00:25:40.037341 waagent[2071]: 2025-11-08T00:25:40.037298Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5047f2b0-7fda-43d5-9c42-187ee7214fe9 eTag: 8013174588431386044 
source: Fabric] Nov 8 00:25:40.037675 waagent[2071]: 2025-11-08T00:25:40.037625Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 8 00:25:40.038230 waagent[2071]: 2025-11-08T00:25:40.038175Z INFO ExtHandler Nov 8 00:25:40.038308 waagent[2071]: 2025-11-08T00:25:40.038259Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:25:40.041888 waagent[2071]: 2025-11-08T00:25:40.041835Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 8 00:25:40.101485 waagent[2071]: 2025-11-08T00:25:40.101405Z INFO ExtHandler Downloaded certificate {'thumbprint': '8682B0CCC4DD1D0D2AD389C05211A23083886882', 'hasPrivateKey': True} Nov 8 00:25:40.102007 waagent[2071]: 2025-11-08T00:25:40.101951Z INFO ExtHandler Fetch goal state completed Nov 8 00:25:40.115848 waagent[2071]: 2025-11-08T00:25:40.115750Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2071 Nov 8 00:25:40.115961 waagent[2071]: 2025-11-08T00:25:40.115913Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 8 00:25:40.117489 waagent[2071]: 2025-11-08T00:25:40.117433Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Nov 8 00:25:40.117864 waagent[2071]: 2025-11-08T00:25:40.117814Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 8 00:25:40.130276 waagent[2071]: 2025-11-08T00:25:40.130240Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 8 00:25:40.130443 waagent[2071]: 2025-11-08T00:25:40.130402Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 8 00:25:40.136867 waagent[2071]: 2025-11-08T00:25:40.136757Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Nov 8 00:25:40.143499 systemd[1]: Reloading requested from client PID 2084 ('systemctl') (unit waagent.service)... Nov 8 00:25:40.143515 systemd[1]: Reloading... Nov 8 00:25:40.232179 zram_generator::config[2114]: No configuration found. Nov 8 00:25:40.356610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:40.437048 systemd[1]: Reloading finished in 292 ms. Nov 8 00:25:40.459722 waagent[2071]: 2025-11-08T00:25:40.459612Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 8 00:25:40.466745 systemd[1]: Reloading requested from client PID 2180 ('systemctl') (unit waagent.service)... Nov 8 00:25:40.466761 systemd[1]: Reloading... Nov 8 00:25:40.560597 zram_generator::config[2215]: No configuration found. Nov 8 00:25:40.678545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:40.758203 systemd[1]: Reloading finished in 291 ms. Nov 8 00:25:40.782131 waagent[2071]: 2025-11-08T00:25:40.780798Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 8 00:25:40.782131 waagent[2071]: 2025-11-08T00:25:40.780985Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 8 00:25:40.899424 waagent[2071]: 2025-11-08T00:25:40.899335Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 8 00:25:40.900007 waagent[2071]: 2025-11-08T00:25:40.899942Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 8 00:25:40.900823 waagent[2071]: 2025-11-08T00:25:40.900742Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 8 00:25:40.901338 waagent[2071]: 2025-11-08T00:25:40.901291Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:25:40.901500 waagent[2071]: 2025-11-08T00:25:40.901340Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 8 00:25:40.901603 waagent[2071]: 2025-11-08T00:25:40.901524Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:25:40.901747 waagent[2071]: 2025-11-08T00:25:40.901706Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:25:40.901924 waagent[2071]: 2025-11-08T00:25:40.901875Z INFO EnvHandler ExtHandler Configure routes Nov 8 00:25:40.902013 waagent[2071]: 2025-11-08T00:25:40.901976Z INFO EnvHandler ExtHandler Gateway:None Nov 8 00:25:40.902088 waagent[2071]: 2025-11-08T00:25:40.902054Z INFO EnvHandler ExtHandler Routes:None Nov 8 00:25:40.904129 waagent[2071]: 2025-11-08T00:25:40.902767Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:25:40.904129 waagent[2071]: 2025-11-08T00:25:40.903053Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
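The waagent log-collection check above is a simple conjunction: all three flags must be true, and because cgroup support is unavailable on this Flatcar build, collection is denied. A sketch of the same gate, with the boolean values copied from the log line (the dict and names are illustrative, not waagent's internals):

```python
# Values taken from the log line above: [True], [False], [True].
conditions = {
    "configuration enabled": True,
    "cgroups enabled": False,      # cgroup monitoring unsupported on this distro
    "python supported": True,
}

# Log collection is allowed only if every condition holds.
log_collection_allowed = all(conditions.values())
print(log_collection_allowed)
```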
Nov 8 00:25:40.904129 waagent[2071]: 2025-11-08T00:25:40.903285Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 8 00:25:40.904129 waagent[2071]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 8 00:25:40.904129 waagent[2071]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 8 00:25:40.904129 waagent[2071]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 8 00:25:40.904129 waagent[2071]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:25:40.904129 waagent[2071]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:25:40.904129 waagent[2071]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:25:40.904129 waagent[2071]: 2025-11-08T00:25:40.903458Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 8 00:25:40.904129 waagent[2071]: 2025-11-08T00:25:40.903613Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 8 00:25:40.904559 waagent[2071]: 2025-11-08T00:25:40.904491Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 8 00:25:40.904627 waagent[2071]: 2025-11-08T00:25:40.904569Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
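The routing table that MonitorHandler dumps above comes straight from `/proc/net/route`, where addresses are little-endian hexadecimal. A small sketch decoding the default-gateway row (`0108C80A`) and the netmask (`00FFFFFF`) back into dotted-quad form; the helper name is mine, the values are from the table above:

```python
import socket
import struct

def hex_to_ip(h: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex,
    # so pack with "<L" before converting to dotted-quad notation.
    return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

gateway = hex_to_ip("0108C80A")   # Gateway column of the default route
netmask = hex_to_ip("00FFFFFF")   # Mask column of the subnet route
subnet = hex_to_ip("0008C80A")    # Destination column of the subnet route
print(gateway, netmask, subnet)
```

Decoded, the table matches the `ip` output further down: eth0 at 10.200.8.16/24 with default gateway 10.200.8.1.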
Nov 8 00:25:40.904820 waagent[2071]: 2025-11-08T00:25:40.904770Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 8 00:25:40.914302 waagent[2071]: 2025-11-08T00:25:40.914250Z INFO ExtHandler ExtHandler Nov 8 00:25:40.915844 waagent[2071]: 2025-11-08T00:25:40.914357Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: eae3798b-781b-450c-bca0-7db79820ee5d correlation 9557e2bd-8736-4d0b-b39e-6566d8b2467e created: 2025-11-08T00:24:36.728233Z] Nov 8 00:25:40.915844 waagent[2071]: 2025-11-08T00:25:40.914855Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 8 00:25:40.915844 waagent[2071]: 2025-11-08T00:25:40.915643Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 8 00:25:40.928810 waagent[2071]: 2025-11-08T00:25:40.928754Z INFO MonitorHandler ExtHandler Network interfaces: Nov 8 00:25:40.928810 waagent[2071]: Executing ['ip', '-a', '-o', 'link']: Nov 8 00:25:40.928810 waagent[2071]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 8 00:25:40.928810 waagent[2071]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:9e:fe brd ff:ff:ff:ff:ff:ff Nov 8 00:25:40.928810 waagent[2071]: 3: enP62799s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:9e:fe brd ff:ff:ff:ff:ff:ff\ altname enP62799p0s2 Nov 8 00:25:40.928810 waagent[2071]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 8 00:25:40.928810 waagent[2071]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 8 00:25:40.928810 waagent[2071]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 8 00:25:40.928810 waagent[2071]: Executing ['ip', 
'-6', '-a', '-o', 'address']: Nov 8 00:25:40.928810 waagent[2071]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 8 00:25:40.928810 waagent[2071]: 2: eth0 inet6 fe80::20d:3aff:feb3:9efe/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 8 00:25:40.951240 waagent[2071]: 2025-11-08T00:25:40.951185Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D2791AB8-DF0E-45F9-BC6F-4FEA1791177C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 8 00:25:40.965269 waagent[2071]: 2025-11-08T00:25:40.965208Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Nov 8 00:25:40.965269 waagent[2071]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.965269 waagent[2071]: pkts bytes target prot opt in out source destination Nov 8 00:25:40.965269 waagent[2071]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.965269 waagent[2071]: pkts bytes target prot opt in out source destination Nov 8 00:25:40.965269 waagent[2071]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.965269 waagent[2071]: pkts bytes target prot opt in out source destination Nov 8 00:25:40.965269 waagent[2071]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:25:40.965269 waagent[2071]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:25:40.965269 waagent[2071]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:25:40.969171 waagent[2071]: 2025-11-08T00:25:40.969108Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 8 00:25:40.969171 waagent[2071]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.969171 waagent[2071]: pkts bytes target prot opt in out source destination Nov 8 00:25:40.969171 waagent[2071]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.969171 waagent[2071]: pkts 
bytes target prot opt in out source destination Nov 8 00:25:40.969171 waagent[2071]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:25:40.969171 waagent[2071]: pkts bytes target prot opt in out source destination Nov 8 00:25:40.969171 waagent[2071]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:25:40.969171 waagent[2071]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:25:40.969171 waagent[2071]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:25:40.969569 waagent[2071]: 2025-11-08T00:25:40.969408Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 8 00:25:42.019041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:25:42.024767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:42.132719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:42.136571 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:42.171902 kubelet[2322]: E1108 00:25:42.171825 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:42.174284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:42.174638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:52.269175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:25:52.274786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:52.295561 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Nov 8 00:25:52.429802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:52.430092 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:52.590026 update_engine[1805]: I20251108 00:25:52.589876 1805 update_attempter.cc:509] Updating boot flags... Nov 8 00:25:52.807313 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:25:52.812836 systemd[1]: Started sshd@0-10.200.8.16:22-10.200.16.10:44640.service - OpenSSH per-connection server daemon (10.200.16.10:44640). Nov 8 00:25:53.115107 kubelet[2343]: E1108 00:25:53.115029 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:53.117815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:53.118147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:53.146160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2366) Nov 8 00:25:53.712749 sshd[2351]: Accepted publickey for core from 10.200.16.10 port 44640 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:53.714596 sshd[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:53.720852 systemd-logind[1801]: New session 3 of user core. Nov 8 00:25:53.729870 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:25:54.266106 systemd[1]: Started sshd@1-10.200.8.16:22-10.200.16.10:44642.service - OpenSSH per-connection server daemon (10.200.16.10:44642). 
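The `SHA256:yxpe…` string sshd logs for each accepted key above is an OpenSSH-style fingerprint: the SHA-256 digest of the public key blob, base64-encoded with the trailing `=` padding stripped. A sketch of that computation on a placeholder blob (the blob here is made up for illustration; a real one comes from the client's public key):

```python
import base64
import hashlib

def ssh_fingerprint(key_blob: bytes) -> str:
    # OpenSSH format: "SHA256:" + unpadded base64 of the key blob's digest.
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

# Hypothetical key blob, just to show the output shape.
fingerprint = ssh_fingerprint(b"example-public-key-blob")
print(fingerprint)
```

A SHA-256 digest is 32 bytes, so the base64 part is always 43 characters after padding removal, matching the length seen in the log.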
Nov 8 00:25:54.885327 sshd[2396]: Accepted publickey for core from 10.200.16.10 port 44642 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:54.887076 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:54.892933 systemd-logind[1801]: New session 4 of user core. Nov 8 00:25:54.901900 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:25:55.344321 sshd[2396]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:55.347795 systemd[1]: sshd@1-10.200.8.16:22-10.200.16.10:44642.service: Deactivated successfully. Nov 8 00:25:55.352546 systemd-logind[1801]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:25:55.353904 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:25:55.355310 systemd-logind[1801]: Removed session 4. Nov 8 00:25:55.458101 systemd[1]: Started sshd@2-10.200.8.16:22-10.200.16.10:44658.service - OpenSSH per-connection server daemon (10.200.16.10:44658). Nov 8 00:25:56.075955 sshd[2404]: Accepted publickey for core from 10.200.16.10 port 44658 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:56.077728 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:56.083748 systemd-logind[1801]: New session 5 of user core. Nov 8 00:25:56.093829 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:25:56.538150 sshd[2404]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:56.541612 systemd[1]: sshd@2-10.200.8.16:22-10.200.16.10:44658.service: Deactivated successfully. Nov 8 00:25:56.546711 systemd-logind[1801]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:25:56.547437 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:25:56.548576 systemd-logind[1801]: Removed session 5. 
Nov 8 00:25:56.646040 systemd[1]: Started sshd@3-10.200.8.16:22-10.200.16.10:44664.service - OpenSSH per-connection server daemon (10.200.16.10:44664). Nov 8 00:25:57.266484 sshd[2412]: Accepted publickey for core from 10.200.16.10 port 44664 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:57.268225 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:57.273838 systemd-logind[1801]: New session 6 of user core. Nov 8 00:25:57.280014 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:25:57.723733 sshd[2412]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:57.728517 systemd[1]: sshd@3-10.200.8.16:22-10.200.16.10:44664.service: Deactivated successfully. Nov 8 00:25:57.733084 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:25:57.733948 systemd-logind[1801]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:25:57.734879 systemd-logind[1801]: Removed session 6. Nov 8 00:25:57.836108 systemd[1]: Started sshd@4-10.200.8.16:22-10.200.16.10:44680.service - OpenSSH per-connection server daemon (10.200.16.10:44680). Nov 8 00:25:58.452290 sshd[2420]: Accepted publickey for core from 10.200.16.10 port 44680 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:58.454067 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:58.459520 systemd-logind[1801]: New session 7 of user core. Nov 8 00:25:58.468849 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 8 00:25:58.830947 sudo[2424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:25:58.831313 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:58.846891 sudo[2424]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:58.949185 sshd[2420]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:58.953008 systemd[1]: sshd@4-10.200.8.16:22-10.200.16.10:44680.service: Deactivated successfully. Nov 8 00:25:58.958238 systemd-logind[1801]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:25:58.959053 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:25:58.960119 systemd-logind[1801]: Removed session 7. Nov 8 00:25:59.061063 systemd[1]: Started sshd@5-10.200.8.16:22-10.200.16.10:44696.service - OpenSSH per-connection server daemon (10.200.16.10:44696). Nov 8 00:25:59.679778 sshd[2429]: Accepted publickey for core from 10.200.16.10 port 44696 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:59.681598 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:59.687460 systemd-logind[1801]: New session 8 of user core. Nov 8 00:25:59.694349 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:26:00.026094 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:26:00.026453 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:00.029873 sudo[2434]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:00.034948 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:26:00.035295 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:00.053858 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Nov 8 00:26:00.055610 auditctl[2437]: No rules Nov 8 00:26:00.056057 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:26:00.056388 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:00.061158 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:26:00.086704 augenrules[2456]: No rules Nov 8 00:26:00.088516 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:00.092205 sudo[2433]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:00.193970 sshd[2429]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:00.198550 systemd[1]: sshd@5-10.200.8.16:22-10.200.16.10:44696.service: Deactivated successfully. Nov 8 00:26:00.202518 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:26:00.203239 systemd-logind[1801]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:26:00.204328 systemd-logind[1801]: Removed session 8. Nov 8 00:26:00.302060 systemd[1]: Started sshd@6-10.200.8.16:22-10.200.16.10:48618.service - OpenSSH per-connection server daemon (10.200.16.10:48618). Nov 8 00:26:00.921043 sshd[2465]: Accepted publickey for core from 10.200.16.10 port 48618 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:26:00.922507 sshd[2465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:00.927384 systemd-logind[1801]: New session 9 of user core. Nov 8 00:26:00.930835 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 8 00:26:01.266745 sudo[2469]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:26:01.267196 sudo[2469]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:26:02.733028 (dockerd)[2485]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:26:02.733031 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:26:03.250126 dockerd[2485]: time="2025-11-08T00:26:03.250063139Z" level=info msg="Starting up" Nov 8 00:26:03.255250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:26:03.262756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:03.466719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:03.471631 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:26:03.507680 kubelet[2506]: E1108 00:26:03.507565 2506 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:26:03.510191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:26:03.510517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:26:04.261933 dockerd[2485]: time="2025-11-08T00:26:04.261885958Z" level=info msg="Loading containers: start." 
Nov 8 00:26:04.372568 kernel: Initializing XFRM netlink socket Nov 8 00:26:04.444814 systemd-networkd[1393]: docker0: Link UP Nov 8 00:26:04.468732 dockerd[2485]: time="2025-11-08T00:26:04.468691367Z" level=info msg="Loading containers: done." Nov 8 00:26:04.488578 dockerd[2485]: time="2025-11-08T00:26:04.488514970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:26:04.488739 dockerd[2485]: time="2025-11-08T00:26:04.488633771Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:26:04.488787 dockerd[2485]: time="2025-11-08T00:26:04.488762572Z" level=info msg="Daemon has completed initialization" Nov 8 00:26:04.582788 dockerd[2485]: time="2025-11-08T00:26:04.582668230Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:26:04.583124 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:26:05.736018 containerd[1841]: time="2025-11-08T00:26:05.735976692Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:26:06.527355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695908667.mount: Deactivated successfully. 
Nov 8 00:26:08.122971 containerd[1841]: time="2025-11-08T00:26:08.122915435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:08.126029 containerd[1841]: time="2025-11-08T00:26:08.125843065Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Nov 8 00:26:08.129458 containerd[1841]: time="2025-11-08T00:26:08.129377701Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:08.133441 containerd[1841]: time="2025-11-08T00:26:08.133391242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:08.137844 containerd[1841]: time="2025-11-08T00:26:08.136406173Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.400385681s" Nov 8 00:26:08.137844 containerd[1841]: time="2025-11-08T00:26:08.136456873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:26:08.138055 containerd[1841]: time="2025-11-08T00:26:08.138032489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:26:09.740257 containerd[1841]: time="2025-11-08T00:26:09.740206929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.742509 containerd[1841]: time="2025-11-08T00:26:09.742327951Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Nov 8 00:26:09.745552 containerd[1841]: time="2025-11-08T00:26:09.744692475Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.749100 containerd[1841]: time="2025-11-08T00:26:09.749047319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.750430 containerd[1841]: time="2025-11-08T00:26:09.750044329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.611973539s" Nov 8 00:26:09.750430 containerd[1841]: time="2025-11-08T00:26:09.750085230Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:26:09.751048 containerd[1841]: time="2025-11-08T00:26:09.751022739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:26:11.103476 containerd[1841]: time="2025-11-08T00:26:11.103423027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:11.105568 containerd[1841]: time="2025-11-08T00:26:11.105489249Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Nov 8 00:26:11.108478 containerd[1841]: time="2025-11-08T00:26:11.108414181Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:11.113444 containerd[1841]: time="2025-11-08T00:26:11.113336635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:11.114804 containerd[1841]: time="2025-11-08T00:26:11.114655549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.363601009s" Nov 8 00:26:11.114804 containerd[1841]: time="2025-11-08T00:26:11.114693350Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:26:11.115500 containerd[1841]: time="2025-11-08T00:26:11.115372357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:26:12.297795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939806344.mount: Deactivated successfully. 
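The containerd pull messages above report both a size and a duration, which makes it easy to estimate registry throughput. Using the kube-scheduler figures from the log (20810986 bytes in 1.363601009 s), a quick back-of-the-envelope calculation:

```python
# Figures taken from the kube-scheduler pull message above.
size_bytes = 20_810_986
duration_s = 1.363601009

# Decimal megabytes per second; roughly 15 MB/s for this pull.
rate_mb_per_s = size_bytes / duration_s / 1e6
print(round(rate_mb_per_s, 2))
```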
Nov 8 00:26:12.821682 containerd[1841]: time="2025-11-08T00:26:12.821633570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:12.823589 containerd[1841]: time="2025-11-08T00:26:12.823541291Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Nov 8 00:26:12.827296 containerd[1841]: time="2025-11-08T00:26:12.826370222Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:12.830402 containerd[1841]: time="2025-11-08T00:26:12.829743459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:12.830402 containerd[1841]: time="2025-11-08T00:26:12.830262365Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.714852807s" Nov 8 00:26:12.830402 containerd[1841]: time="2025-11-08T00:26:12.830295265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:26:12.831014 containerd[1841]: time="2025-11-08T00:26:12.830987372Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:26:13.417500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619692850.mount: Deactivated successfully. Nov 8 00:26:13.518879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Nov 8 00:26:13.525902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:13.639718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:13.644045 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:26:14.284004 kubelet[2734]: E1108 00:26:14.283950 2734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:26:14.286453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:26:14.286798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:26:15.481708 containerd[1841]: time="2025-11-08T00:26:15.481658188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.483985 containerd[1841]: time="2025-11-08T00:26:15.483773111Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Nov 8 00:26:15.486593 containerd[1841]: time="2025-11-08T00:26:15.486520041Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.491579 containerd[1841]: time="2025-11-08T00:26:15.491544796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.492789 containerd[1841]: time="2025-11-08T00:26:15.492623408Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.661605735s" Nov 8 00:26:15.492789 containerd[1841]: time="2025-11-08T00:26:15.492660108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:26:15.493511 containerd[1841]: time="2025-11-08T00:26:15.493276115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:26:16.053085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814449190.mount: Deactivated successfully. Nov 8 00:26:16.069745 containerd[1841]: time="2025-11-08T00:26:16.069687603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:16.072110 containerd[1841]: time="2025-11-08T00:26:16.072046928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 8 00:26:16.074772 containerd[1841]: time="2025-11-08T00:26:16.074720058Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:16.078324 containerd[1841]: time="2025-11-08T00:26:16.078259496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:16.079644 containerd[1841]: time="2025-11-08T00:26:16.078965404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 585.657189ms" Nov 8 00:26:16.079644 containerd[1841]: time="2025-11-08T00:26:16.079003704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:26:16.079644 containerd[1841]: time="2025-11-08T00:26:16.079529110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:26:16.661025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425847414.mount: Deactivated successfully. Nov 8 00:26:18.862080 containerd[1841]: time="2025-11-08T00:26:18.861962077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.865051 containerd[1841]: time="2025-11-08T00:26:18.864913411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Nov 8 00:26:18.868300 containerd[1841]: time="2025-11-08T00:26:18.868236749Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.872937 containerd[1841]: time="2025-11-08T00:26:18.872800601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.874633 containerd[1841]: time="2025-11-08T00:26:18.874277518Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.794694407s" Nov 8 00:26:18.874633 containerd[1841]: time="2025-11-08T00:26:18.874314019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:26:21.376806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:21.382825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:21.422279 systemd[1]: Reloading requested from client PID 2873 ('systemctl') (unit session-9.scope)... Nov 8 00:26:21.422303 systemd[1]: Reloading... Nov 8 00:26:21.563561 zram_generator::config[2920]: No configuration found. Nov 8 00:26:21.680875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:21.759577 systemd[1]: Reloading finished in 336 ms. Nov 8 00:26:21.808683 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:26:21.808791 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:26:21.809174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:21.816307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:22.179807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:22.184064 (kubelet)[2995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:26:22.221353 kubelet[2995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:26:22.221353 kubelet[2995]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:26:22.221353 kubelet[2995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:26:22.221353 kubelet[2995]: I1108 00:26:22.220061 2995 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:26:22.969121 kubelet[2995]: I1108 00:26:22.969059 2995 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:26:22.969121 kubelet[2995]: I1108 00:26:22.969101 2995 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:26:22.969989 kubelet[2995]: I1108 00:26:22.969627 2995 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:26:23.036566 kubelet[2995]: E1108 00:26:23.036042 2995 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:23.037363 kubelet[2995]: I1108 00:26:23.037229 2995 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:26:23.047682 kubelet[2995]: E1108 00:26:23.047650 2995 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:26:23.047682 kubelet[2995]: I1108 00:26:23.047683 2995 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:26:23.051278 kubelet[2995]: I1108 00:26:23.051247 2995 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:26:23.051724 kubelet[2995]: I1108 00:26:23.051687 2995 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:26:23.051906 kubelet[2995]: I1108 00:26:23.051720 2995 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-75d3e74165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Expe
rimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:26:23.052051 kubelet[2995]: I1108 00:26:23.051916 2995 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:26:23.052051 kubelet[2995]: I1108 00:26:23.051929 2995 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:26:23.052131 kubelet[2995]: I1108 00:26:23.052064 2995 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:23.055182 kubelet[2995]: I1108 00:26:23.055159 2995 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:26:23.056569 kubelet[2995]: I1108 00:26:23.055191 2995 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:26:23.056569 kubelet[2995]: I1108 00:26:23.055213 2995 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:26:23.056569 kubelet[2995]: I1108 00:26:23.055225 2995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:26:23.063108 kubelet[2995]: W1108 00:26:23.063065 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:23.063452 kubelet[2995]: E1108 00:26:23.063253 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:23.063452 kubelet[2995]: I1108 00:26:23.063388 2995 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" 
apiVersion="v1" Nov 8 00:26:23.064052 kubelet[2995]: I1108 00:26:23.064026 2995 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:26:23.065582 kubelet[2995]: W1108 00:26:23.065419 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-75d3e74165&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:23.065582 kubelet[2995]: E1108 00:26:23.065479 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-75d3e74165&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:23.065708 kubelet[2995]: W1108 00:26:23.065589 2995 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 8 00:26:23.067381 kubelet[2995]: I1108 00:26:23.067359 2995 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:26:23.067455 kubelet[2995]: I1108 00:26:23.067407 2995 server.go:1287] "Started kubelet" Nov 8 00:26:23.068562 kubelet[2995]: I1108 00:26:23.067549 2995 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:26:23.068562 kubelet[2995]: I1108 00:26:23.068302 2995 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:26:23.070812 kubelet[2995]: I1108 00:26:23.070401 2995 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:26:23.073946 kubelet[2995]: I1108 00:26:23.073382 2995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:26:23.073946 kubelet[2995]: I1108 00:26:23.073656 2995 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:26:23.075195 kubelet[2995]: E1108 00:26:23.073832 2995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-75d3e74165.1875e06bdd7b21c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-75d3e74165,UID:ci-4081.3.6-n-75d3e74165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-75d3e74165,},FirstTimestamp:2025-11-08 00:26:23.067374024 +0000 UTC m=+0.879317010,LastTimestamp:2025-11-08 00:26:23.067374024 +0000 UTC m=+0.879317010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-75d3e74165,}" Nov 8 00:26:23.077529 kubelet[2995]: I1108 00:26:23.077507 2995 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:26:23.078918 kubelet[2995]: E1108 00:26:23.078892 2995 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-75d3e74165\" not found" Nov 8 00:26:23.079063 kubelet[2995]: I1108 00:26:23.079052 2995 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:26:23.079431 kubelet[2995]: I1108 00:26:23.079415 2995 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:26:23.079589 kubelet[2995]: I1108 00:26:23.079579 2995 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:26:23.080102 kubelet[2995]: W1108 00:26:23.080065 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:23.080233 kubelet[2995]: E1108 00:26:23.080202 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:23.080506 kubelet[2995]: I1108 00:26:23.080486 2995 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:26:23.080712 kubelet[2995]: I1108 00:26:23.080690 2995 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:26:23.082586 kubelet[2995]: E1108 00:26:23.082569 2995 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:26:23.082875 kubelet[2995]: I1108 00:26:23.082860 2995 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:26:23.096429 kubelet[2995]: E1108 00:26:23.096389 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-75d3e74165?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="200ms" Nov 8 00:26:23.102442 kubelet[2995]: I1108 00:26:23.102329 2995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:26:23.103768 kubelet[2995]: I1108 00:26:23.103749 2995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:26:23.104004 kubelet[2995]: I1108 00:26:23.103842 2995 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:26:23.104004 kubelet[2995]: I1108 00:26:23.103878 2995 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:26:23.104004 kubelet[2995]: I1108 00:26:23.103890 2995 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:26:23.104004 kubelet[2995]: E1108 00:26:23.103948 2995 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:26:23.111649 kubelet[2995]: W1108 00:26:23.111600 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:23.111745 kubelet[2995]: E1108 00:26:23.111722 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:23.139328 kubelet[2995]: I1108 00:26:23.139080 2995 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:26:23.139328 kubelet[2995]: I1108 00:26:23.139093 2995 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:26:23.139328 kubelet[2995]: I1108 00:26:23.139108 2995 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:23.144982 kubelet[2995]: I1108 00:26:23.144960 2995 policy_none.go:49] "None policy: Start" Nov 8 00:26:23.144982 kubelet[2995]: I1108 00:26:23.144982 2995 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:26:23.145121 kubelet[2995]: I1108 00:26:23.144995 2995 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:26:23.161151 kubelet[2995]: I1108 00:26:23.161125 2995 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:26:23.161320 kubelet[2995]: I1108 
00:26:23.161301 2995 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:26:23.161383 kubelet[2995]: I1108 00:26:23.161317 2995 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:26:23.162477 kubelet[2995]: I1108 00:26:23.162451 2995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:26:23.165228 kubelet[2995]: E1108 00:26:23.165209 2995 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:26:23.165315 kubelet[2995]: E1108 00:26:23.165258 2995 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-75d3e74165\" not found" Nov 8 00:26:23.209891 kubelet[2995]: E1108 00:26:23.209844 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.215204 kubelet[2995]: E1108 00:26:23.215005 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.216355 kubelet[2995]: E1108 00:26:23.216336 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.262873 kubelet[2995]: I1108 00:26:23.262784 2995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.263337 kubelet[2995]: E1108 00:26:23.263209 2995 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.281269 
kubelet[2995]: I1108 00:26:23.281161 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.281512 kubelet[2995]: I1108 00:26:23.281276 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.281512 kubelet[2995]: I1108 00:26:23.281358 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.281512 kubelet[2995]: I1108 00:26:23.281435 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fa78c798c1263d08414b6055e6b0f10-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-75d3e74165\" (UID: \"3fa78c798c1263d08414b6055e6b0f10\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.282014 kubelet[2995]: I1108 00:26:23.281578 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.282014 kubelet[2995]: I1108 00:26:23.281670 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.282014 kubelet[2995]: I1108 00:26:23.281740 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.282014 kubelet[2995]: I1108 00:26:23.281809 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.282014 kubelet[2995]: I1108 00:26:23.281882 2995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.297570 kubelet[2995]: E1108 00:26:23.297513 2995 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-75d3e74165?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="400ms" Nov 8 00:26:23.466114 kubelet[2995]: I1108 00:26:23.466079 2995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.466489 kubelet[2995]: E1108 00:26:23.466457 2995 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.512196 containerd[1841]: time="2025-11-08T00:26:23.512145538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-75d3e74165,Uid:3548641ff3953b83bdc7e82f2d0beb47,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:23.516906 containerd[1841]: time="2025-11-08T00:26:23.516609289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-75d3e74165,Uid:078acf0dad2f35a32573416b60859c60,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:23.517774 containerd[1841]: time="2025-11-08T00:26:23.517719502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-75d3e74165,Uid:3fa78c798c1263d08414b6055e6b0f10,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:23.698671 kubelet[2995]: E1108 00:26:23.698628 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-75d3e74165?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="800ms" Nov 8 00:26:23.868846 kubelet[2995]: I1108 00:26:23.868815 2995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.869165 kubelet[2995]: E1108 00:26:23.869133 2995 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:23.985891 kubelet[2995]: W1108 00:26:23.985827 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:23.986040 kubelet[2995]: E1108 00:26:23.985900 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:24.042986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556632095.mount: Deactivated successfully. Nov 8 00:26:24.065725 containerd[1841]: time="2025-11-08T00:26:24.065676401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:24.068192 containerd[1841]: time="2025-11-08T00:26:24.068146830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 8 00:26:24.071150 containerd[1841]: time="2025-11-08T00:26:24.071117364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:24.073800 containerd[1841]: time="2025-11-08T00:26:24.073766594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:24.076464 
containerd[1841]: time="2025-11-08T00:26:24.076415325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:26:24.080946 containerd[1841]: time="2025-11-08T00:26:24.080901376Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:24.083157 containerd[1841]: time="2025-11-08T00:26:24.083083801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:26:24.087129 containerd[1841]: time="2025-11-08T00:26:24.087079447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:26:24.088110 containerd[1841]: time="2025-11-08T00:26:24.087833656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.794951ms" Nov 8 00:26:24.089249 containerd[1841]: time="2025-11-08T00:26:24.089213772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.973433ms" Nov 8 00:26:24.091318 containerd[1841]: time="2025-11-08T00:26:24.091285396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 574.616806ms" Nov 8 00:26:24.199499 kubelet[2995]: W1108 00:26:24.199371 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-75d3e74165&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:24.199499 kubelet[2995]: E1108 00:26:24.199440 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-75d3e74165&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:24.421253 containerd[1841]: time="2025-11-08T00:26:24.420825984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:24.421253 containerd[1841]: time="2025-11-08T00:26:24.420913485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:24.421253 containerd[1841]: time="2025-11-08T00:26:24.420929386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.421253 containerd[1841]: time="2025-11-08T00:26:24.421037387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.425438 containerd[1841]: time="2025-11-08T00:26:24.425353436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:24.425438 containerd[1841]: time="2025-11-08T00:26:24.425400037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:24.425438 containerd[1841]: time="2025-11-08T00:26:24.425415437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.427098 containerd[1841]: time="2025-11-08T00:26:24.426914254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.427651 containerd[1841]: time="2025-11-08T00:26:24.426330648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:24.427651 containerd[1841]: time="2025-11-08T00:26:24.427376160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:24.427651 containerd[1841]: time="2025-11-08T00:26:24.427395160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.427651 containerd[1841]: time="2025-11-08T00:26:24.427482261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.503756 kubelet[2995]: E1108 00:26:24.503650 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-75d3e74165?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="1.6s" Nov 8 00:26:24.520249 containerd[1841]: time="2025-11-08T00:26:24.513725952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-75d3e74165,Uid:3548641ff3953b83bdc7e82f2d0beb47,Namespace:kube-system,Attempt:0,} returns sandbox id \"d554f2f85f52f73fa35337ed8931a0823ac1df09ea397d00cff95c476bf8fde6\"" Nov 8 00:26:24.528033 containerd[1841]: time="2025-11-08T00:26:24.527996916Z" level=info msg="CreateContainer within sandbox \"d554f2f85f52f73fa35337ed8931a0823ac1df09ea397d00cff95c476bf8fde6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:26:24.546785 containerd[1841]: time="2025-11-08T00:26:24.546750832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-75d3e74165,Uid:3fa78c798c1263d08414b6055e6b0f10,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c39930a1b694d154ed42e35afb1be0b4f301fbca0c727d1095f0c9856b2b817\"" Nov 8 00:26:24.551125 containerd[1841]: time="2025-11-08T00:26:24.551096082Z" level=info msg="CreateContainer within sandbox \"6c39930a1b694d154ed42e35afb1be0b4f301fbca0c727d1095f0c9856b2b817\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:26:24.553796 containerd[1841]: time="2025-11-08T00:26:24.553776213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-75d3e74165,Uid:078acf0dad2f35a32573416b60859c60,Namespace:kube-system,Attempt:0,} returns sandbox id \"e433546204404a007181a45156664b6aed7b04f8126c4029cfea5df9bf79b533\"" Nov 8 00:26:24.559546 containerd[1841]: 
time="2025-11-08T00:26:24.559500879Z" level=info msg="CreateContainer within sandbox \"e433546204404a007181a45156664b6aed7b04f8126c4029cfea5df9bf79b533\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:26:24.599086 containerd[1841]: time="2025-11-08T00:26:24.598820531Z" level=info msg="CreateContainer within sandbox \"d554f2f85f52f73fa35337ed8931a0823ac1df09ea397d00cff95c476bf8fde6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286837bfb6d917c4b9f8bb03e57303ca3f40875570c05d81fe4ff56b6140cfb9\"" Nov 8 00:26:24.599921 containerd[1841]: time="2025-11-08T00:26:24.599901543Z" level=info msg="StartContainer for \"286837bfb6d917c4b9f8bb03e57303ca3f40875570c05d81fe4ff56b6140cfb9\"" Nov 8 00:26:24.613445 containerd[1841]: time="2025-11-08T00:26:24.613411298Z" level=info msg="CreateContainer within sandbox \"6c39930a1b694d154ed42e35afb1be0b4f301fbca0c727d1095f0c9856b2b817\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3a1aea96ee28e680ae7d4e9c917edd8e3c887f3de23df79d9b2a2bc49f059fb\"" Nov 8 00:26:24.614593 containerd[1841]: time="2025-11-08T00:26:24.614161307Z" level=info msg="StartContainer for \"c3a1aea96ee28e680ae7d4e9c917edd8e3c887f3de23df79d9b2a2bc49f059fb\"" Nov 8 00:26:24.623177 containerd[1841]: time="2025-11-08T00:26:24.623151310Z" level=info msg="CreateContainer within sandbox \"e433546204404a007181a45156664b6aed7b04f8126c4029cfea5df9bf79b533\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66c0b87e38e7ccdd4ac22a107cc5f0fd45a50c70c469446511112a8e26a25a8e\"" Nov 8 00:26:24.624217 containerd[1841]: time="2025-11-08T00:26:24.624189822Z" level=info msg="StartContainer for \"66c0b87e38e7ccdd4ac22a107cc5f0fd45a50c70c469446511112a8e26a25a8e\"" Nov 8 00:26:24.634181 kubelet[2995]: W1108 00:26:24.633527 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.8.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:24.634181 kubelet[2995]: E1108 00:26:24.634142 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:24.656787 kubelet[2995]: W1108 00:26:24.656730 2995 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Nov 8 00:26:24.658549 kubelet[2995]: E1108 00:26:24.656909 2995 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.16:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:26:24.675738 kubelet[2995]: I1108 00:26:24.675591 2995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:24.676282 kubelet[2995]: E1108 00:26:24.676222 2995 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:24.727772 containerd[1841]: time="2025-11-08T00:26:24.726372997Z" level=info msg="StartContainer for \"286837bfb6d917c4b9f8bb03e57303ca3f40875570c05d81fe4ff56b6140cfb9\" returns successfully" Nov 8 00:26:24.762638 containerd[1841]: time="2025-11-08T00:26:24.759914783Z" level=info msg="StartContainer 
for \"c3a1aea96ee28e680ae7d4e9c917edd8e3c887f3de23df79d9b2a2bc49f059fb\" returns successfully" Nov 8 00:26:24.771703 containerd[1841]: time="2025-11-08T00:26:24.771654718Z" level=info msg="StartContainer for \"66c0b87e38e7ccdd4ac22a107cc5f0fd45a50c70c469446511112a8e26a25a8e\" returns successfully" Nov 8 00:26:25.131973 kubelet[2995]: E1108 00:26:25.131943 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:25.139137 kubelet[2995]: E1108 00:26:25.138921 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:25.142559 kubelet[2995]: E1108 00:26:25.141575 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.147572 kubelet[2995]: E1108 00:26:26.146288 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.150244 kubelet[2995]: E1108 00:26:26.148365 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.150244 kubelet[2995]: E1108 00:26:26.149819 2995 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.280569 kubelet[2995]: I1108 00:26:26.279824 2995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.637556 kubelet[2995]: E1108 00:26:26.637254 2995 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-75d3e74165\" not found" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.725677 kubelet[2995]: I1108 00:26:26.723668 2995 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.781325 kubelet[2995]: I1108 00:26:26.781049 2995 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.803561 kubelet[2995]: E1108 00:26:26.802730 2995 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.803561 kubelet[2995]: I1108 00:26:26.802770 2995 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.810017 kubelet[2995]: E1108 00:26:26.809828 2995 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-75d3e74165\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.810017 kubelet[2995]: I1108 00:26:26.809862 2995 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:26.813695 kubelet[2995]: E1108 00:26:26.813669 2995 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:27.062964 kubelet[2995]: I1108 00:26:27.062907 2995 apiserver.go:52] "Watching apiserver" Nov 8 00:26:27.080250 kubelet[2995]: I1108 00:26:27.080213 2995 desired_state_of_world_populator.go:158] "Finished populating initial 
desired state of world" Nov 8 00:26:27.143387 kubelet[2995]: I1108 00:26:27.143351 2995 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:27.145671 kubelet[2995]: E1108 00:26:27.145630 2995 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:28.955057 systemd[1]: Reloading requested from client PID 3264 ('systemctl') (unit session-9.scope)... Nov 8 00:26:28.955072 systemd[1]: Reloading... Nov 8 00:26:29.068570 zram_generator::config[3310]: No configuration found. Nov 8 00:26:29.194197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:29.285428 systemd[1]: Reloading finished in 329 ms. Nov 8 00:26:29.327561 kubelet[2995]: I1108 00:26:29.327349 2995 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:26:29.327617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:29.335675 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:26:29.336040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:26:29.344775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:26:29.462749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:26:29.478154 (kubelet)[3381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:26:29.521560 kubelet[3381]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:26:29.521560 kubelet[3381]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:26:29.521560 kubelet[3381]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:26:29.521560 kubelet[3381]: I1108 00:26:29.520725 3381 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:26:29.526633 kubelet[3381]: I1108 00:26:29.526593 3381 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:26:29.526633 kubelet[3381]: I1108 00:26:29.526615 3381 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:26:29.526888 kubelet[3381]: I1108 00:26:29.526873 3381 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:26:29.528021 kubelet[3381]: I1108 00:26:29.527996 3381 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 8 00:26:29.530604 kubelet[3381]: I1108 00:26:29.530020 3381 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:26:29.532922 kubelet[3381]: E1108 00:26:29.532891 3381 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:26:29.532922 kubelet[3381]: I1108 00:26:29.532922 3381 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:26:30.062153 kubelet[3381]: I1108 00:26:30.062107 3381 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:26:30.120620 kubelet[3381]: I1108 00:26:30.063207 3381 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:26:30.120620 kubelet[3381]: I1108 00:26:30.063247 3381 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-75d3e74165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:26:30.120620 kubelet[3381]: I1108 00:26:30.063468 3381 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:26:30.120620 kubelet[3381]: I1108 00:26:30.063502 3381 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.064223 3381 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.065650 3381 kubelet.go:446] 
"Attempting to sync node with API server" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.065689 3381 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.065720 3381 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.065734 3381 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.069020 3381 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.069514 3381 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.070007 3381 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.070042 3381 server.go:1287] "Started kubelet" Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.087577 3381 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.090096 3381 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:26:30.120922 kubelet[3381]: I1108 00:26:30.091510 3381 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:26:30.120922 kubelet[3381]: E1108 00:26:30.096209 3381 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:26:30.122158 kubelet[3381]: I1108 00:26:30.120933 3381 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:26:30.122158 kubelet[3381]: I1108 00:26:30.121525 3381 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:26:30.122158 kubelet[3381]: I1108 00:26:30.121587 3381 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:26:30.125844 kubelet[3381]: I1108 00:26:30.125720 3381 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:26:30.126625 kubelet[3381]: I1108 00:26:30.126493 3381 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:26:30.126770 kubelet[3381]: I1108 00:26:30.126757 3381 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:26:30.128014 kubelet[3381]: I1108 00:26:30.127444 3381 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:26:30.128014 kubelet[3381]: I1108 00:26:30.127578 3381 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:26:30.132564 kubelet[3381]: I1108 00:26:30.131052 3381 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:26:30.145134 kubelet[3381]: I1108 00:26:30.145079 3381 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:26:30.147944 kubelet[3381]: I1108 00:26:30.147697 3381 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:26:30.147944 kubelet[3381]: I1108 00:26:30.147731 3381 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:26:30.147944 kubelet[3381]: I1108 00:26:30.147752 3381 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:26:30.147944 kubelet[3381]: I1108 00:26:30.147761 3381 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:26:30.147944 kubelet[3381]: E1108 00:26:30.147807 3381 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:26:30.221885 kubelet[3381]: I1108 00:26:30.221858 3381 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:26:30.222056 kubelet[3381]: I1108 00:26:30.222045 3381 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:26:30.222124 kubelet[3381]: I1108 00:26:30.222117 3381 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:26:30.222382 kubelet[3381]: I1108 00:26:30.222366 3381 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:26:30.222615 kubelet[3381]: I1108 00:26:30.222529 3381 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:26:30.222697 kubelet[3381]: I1108 00:26:30.222691 3381 policy_none.go:49] "None policy: Start" Nov 8 00:26:30.222770 kubelet[3381]: I1108 00:26:30.222763 3381 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:26:30.222830 kubelet[3381]: I1108 00:26:30.222823 3381 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:26:30.223026 kubelet[3381]: I1108 00:26:30.223017 3381 state_mem.go:75] "Updated machine memory state" Nov 8 00:26:30.224426 kubelet[3381]: I1108 00:26:30.224412 3381 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:26:30.224786 kubelet[3381]: I1108 00:26:30.224764 
3381 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:26:30.224866 kubelet[3381]: I1108 00:26:30.224786 3381 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:26:30.225825 kubelet[3381]: I1108 00:26:30.225803 3381 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:26:30.231932 kubelet[3381]: E1108 00:26:30.231917 3381 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:26:30.249448 kubelet[3381]: I1108 00:26:30.249389 3381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.251482 kubelet[3381]: I1108 00:26:30.251458 3381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.254131 kubelet[3381]: I1108 00:26:30.251769 3381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.268718 kubelet[3381]: W1108 00:26:30.268379 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:26:30.273064 kubelet[3381]: W1108 00:26:30.272678 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:26:30.273749 kubelet[3381]: W1108 00:26:30.273615 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:26:30.329309 kubelet[3381]: I1108 00:26:30.328928 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3fa78c798c1263d08414b6055e6b0f10-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-75d3e74165\" (UID: \"3fa78c798c1263d08414b6055e6b0f10\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.329309 kubelet[3381]: I1108 00:26:30.328976 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.329309 kubelet[3381]: I1108 00:26:30.329007 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.329309 kubelet[3381]: I1108 00:26:30.329036 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.329309 kubelet[3381]: I1108 00:26:30.329062 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.331141 kubelet[3381]: I1108 
00:26:30.329087 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.331141 kubelet[3381]: I1108 00:26:30.329111 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.331141 kubelet[3381]: I1108 00:26:30.329133 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3548641ff3953b83bdc7e82f2d0beb47-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-75d3e74165\" (UID: \"3548641ff3953b83bdc7e82f2d0beb47\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.331141 kubelet[3381]: I1108 00:26:30.329156 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/078acf0dad2f35a32573416b60859c60-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-75d3e74165\" (UID: \"078acf0dad2f35a32573416b60859c60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.342145 kubelet[3381]: I1108 00:26:30.340406 3381 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:30.350498 kubelet[3381]: I1108 00:26:30.350468 3381 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-75d3e74165" Nov 8 
00:26:30.350612 kubelet[3381]: I1108 00:26:30.350559 3381 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-75d3e74165" Nov 8 00:26:31.068335 kubelet[3381]: I1108 00:26:31.067985 3381 apiserver.go:52] "Watching apiserver" Nov 8 00:26:31.126912 kubelet[3381]: I1108 00:26:31.126866 3381 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:26:31.178846 kubelet[3381]: I1108 00:26:31.178769 3381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:31.188380 kubelet[3381]: W1108 00:26:31.188122 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:26:31.188380 kubelet[3381]: E1108 00:26:31.188185 3381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-75d3e74165\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" Nov 8 00:26:31.266951 kubelet[3381]: I1108 00:26:31.266856 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-75d3e74165" podStartSLOduration=1.266835234 podStartE2EDuration="1.266835234s" podCreationTimestamp="2025-11-08 00:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:31.239786188 +0000 UTC m=+1.756030032" watchObservedRunningTime="2025-11-08 00:26:31.266835234 +0000 UTC m=+1.783079078" Nov 8 00:26:31.290362 kubelet[3381]: I1108 00:26:31.290296 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-75d3e74165" podStartSLOduration=1.290277533 podStartE2EDuration="1.290277533s" podCreationTimestamp="2025-11-08 00:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:31.29003353 +0000 UTC m=+1.806277474" watchObservedRunningTime="2025-11-08 00:26:31.290277533 +0000 UTC m=+1.806521377" Nov 8 00:26:31.290592 kubelet[3381]: I1108 00:26:31.290392 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-75d3e74165" podStartSLOduration=1.290384535 podStartE2EDuration="1.290384535s" podCreationTimestamp="2025-11-08 00:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:31.268697058 +0000 UTC m=+1.784940902" watchObservedRunningTime="2025-11-08 00:26:31.290384535 +0000 UTC m=+1.806628379" Nov 8 00:26:33.463618 kubelet[3381]: I1108 00:26:33.463495 3381 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:26:33.464446 containerd[1841]: time="2025-11-08T00:26:33.464393505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:26:33.465283 kubelet[3381]: I1108 00:26:33.464663 3381 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:26:34.354183 kubelet[3381]: I1108 00:26:34.354142 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/281d14f8-822f-4fc2-8b51-97962c83cded-lib-modules\") pod \"kube-proxy-9ktg5\" (UID: \"281d14f8-822f-4fc2-8b51-97962c83cded\") " pod="kube-system/kube-proxy-9ktg5" Nov 8 00:26:34.354357 kubelet[3381]: I1108 00:26:34.354195 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/281d14f8-822f-4fc2-8b51-97962c83cded-kube-proxy\") pod \"kube-proxy-9ktg5\" (UID: \"281d14f8-822f-4fc2-8b51-97962c83cded\") " pod="kube-system/kube-proxy-9ktg5" Nov 8 00:26:34.354357 kubelet[3381]: I1108 00:26:34.354241 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/281d14f8-822f-4fc2-8b51-97962c83cded-xtables-lock\") pod \"kube-proxy-9ktg5\" (UID: \"281d14f8-822f-4fc2-8b51-97962c83cded\") " pod="kube-system/kube-proxy-9ktg5" Nov 8 00:26:34.354357 kubelet[3381]: I1108 00:26:34.354263 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdmq\" (UniqueName: \"kubernetes.io/projected/281d14f8-822f-4fc2-8b51-97962c83cded-kube-api-access-mcdmq\") pod \"kube-proxy-9ktg5\" (UID: \"281d14f8-822f-4fc2-8b51-97962c83cded\") " pod="kube-system/kube-proxy-9ktg5" Nov 8 00:26:34.631681 containerd[1841]: time="2025-11-08T00:26:34.631507514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ktg5,Uid:281d14f8-822f-4fc2-8b51-97962c83cded,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:34.656626 kubelet[3381]: I1108 00:26:34.656543 3381 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57a0db37-4b46-4609-aa68-4bb818b4e9d0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ssb2l\" (UID: \"57a0db37-4b46-4609-aa68-4bb818b4e9d0\") " pod="tigera-operator/tigera-operator-7dcd859c48-ssb2l" Nov 8 00:26:34.657779 kubelet[3381]: I1108 00:26:34.657691 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl7q\" (UniqueName: \"kubernetes.io/projected/57a0db37-4b46-4609-aa68-4bb818b4e9d0-kube-api-access-lbl7q\") pod \"tigera-operator-7dcd859c48-ssb2l\" (UID: \"57a0db37-4b46-4609-aa68-4bb818b4e9d0\") " pod="tigera-operator/tigera-operator-7dcd859c48-ssb2l" Nov 8 00:26:34.671985 containerd[1841]: time="2025-11-08T00:26:34.671701727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:34.672225 containerd[1841]: time="2025-11-08T00:26:34.671755528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:34.672225 containerd[1841]: time="2025-11-08T00:26:34.671806629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:34.672225 containerd[1841]: time="2025-11-08T00:26:34.671936730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:34.712938 containerd[1841]: time="2025-11-08T00:26:34.712898854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ktg5,Uid:281d14f8-822f-4fc2-8b51-97962c83cded,Namespace:kube-system,Attempt:0,} returns sandbox id \"747496b6c2ceb710499a1545b23e55abe03d14b7e3df706c5ad807b3b0acb91e\"" Nov 8 00:26:34.715563 containerd[1841]: time="2025-11-08T00:26:34.715509787Z" level=info msg="CreateContainer within sandbox \"747496b6c2ceb710499a1545b23e55abe03d14b7e3df706c5ad807b3b0acb91e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:26:34.858664 containerd[1841]: time="2025-11-08T00:26:34.858311111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ssb2l,Uid:57a0db37-4b46-4609-aa68-4bb818b4e9d0,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:26:35.111912 containerd[1841]: time="2025-11-08T00:26:35.111856715Z" level=info msg="CreateContainer within sandbox \"747496b6c2ceb710499a1545b23e55abe03d14b7e3df706c5ad807b3b0acb91e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9bc53e638b6480102ab2c81fa7a50cdb2d37faf868a02f4fd7d2f0055a347628\"" Nov 8 00:26:35.113111 containerd[1841]: time="2025-11-08T00:26:35.112972527Z" level=info msg="StartContainer for \"9bc53e638b6480102ab2c81fa7a50cdb2d37faf868a02f4fd7d2f0055a347628\"" Nov 8 00:26:35.177196 containerd[1841]: time="2025-11-08T00:26:35.177154062Z" level=info msg="StartContainer for \"9bc53e638b6480102ab2c81fa7a50cdb2d37faf868a02f4fd7d2f0055a347628\" returns successfully" Nov 8 00:26:35.200091 kubelet[3381]: I1108 00:26:35.200030 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9ktg5" podStartSLOduration=1.200010024 podStartE2EDuration="1.200010024s" podCreationTimestamp="2025-11-08 00:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-08 00:26:35.199598719 +0000 UTC m=+5.715842663" watchObservedRunningTime="2025-11-08 00:26:35.200010024 +0000 UTC m=+5.716253868" Nov 8 00:26:35.252870 containerd[1841]: time="2025-11-08T00:26:35.242525711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:35.252870 containerd[1841]: time="2025-11-08T00:26:35.242620712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:35.252870 containerd[1841]: time="2025-11-08T00:26:35.242655812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:35.252870 containerd[1841]: time="2025-11-08T00:26:35.243399621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:35.321281 containerd[1841]: time="2025-11-08T00:26:35.320775607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ssb2l,Uid:57a0db37-4b46-4609-aa68-4bb818b4e9d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd22a3d85733c20599b65958c33e0cd4be237f225694f6f1ae9b2b2c447d0427\"" Nov 8 00:26:35.323553 containerd[1841]: time="2025-11-08T00:26:35.323414637Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:26:46.700564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002912254.mount: Deactivated successfully. 
Nov 8 00:26:50.224501 containerd[1841]: time="2025-11-08T00:26:50.224452253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:50.226620 containerd[1841]: time="2025-11-08T00:26:50.226457175Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:26:50.229648 containerd[1841]: time="2025-11-08T00:26:50.229589209Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:50.233569 containerd[1841]: time="2025-11-08T00:26:50.233503252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:50.234688 containerd[1841]: time="2025-11-08T00:26:50.234197260Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 14.910744023s" Nov 8 00:26:50.234688 containerd[1841]: time="2025-11-08T00:26:50.234235460Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:26:50.236700 containerd[1841]: time="2025-11-08T00:26:50.236499485Z" level=info msg="CreateContainer within sandbox \"dd22a3d85733c20599b65958c33e0cd4be237f225694f6f1ae9b2b2c447d0427\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:26:50.265406 containerd[1841]: time="2025-11-08T00:26:50.265212600Z" level=info msg="CreateContainer within sandbox 
\"dd22a3d85733c20599b65958c33e0cd4be237f225694f6f1ae9b2b2c447d0427\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c90a4ba615760823ab89aa621ddc2e79fb06df100e17af7e4c9c5b7a0e27bb10\"" Nov 8 00:26:50.266770 containerd[1841]: time="2025-11-08T00:26:50.266737117Z" level=info msg="StartContainer for \"c90a4ba615760823ab89aa621ddc2e79fb06df100e17af7e4c9c5b7a0e27bb10\"" Nov 8 00:26:50.326059 containerd[1841]: time="2025-11-08T00:26:50.325456260Z" level=info msg="StartContainer for \"c90a4ba615760823ab89aa621ddc2e79fb06df100e17af7e4c9c5b7a0e27bb10\" returns successfully" Nov 8 00:26:56.800757 sudo[2469]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:56.907759 sshd[2465]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:56.914006 systemd[1]: sshd@6-10.200.8.16:22-10.200.16.10:48618.service: Deactivated successfully. Nov 8 00:26:56.921429 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:26:56.925869 systemd-logind[1801]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:26:56.933765 systemd-logind[1801]: Removed session 9. 
Nov 8 00:27:02.767663 kubelet[3381]: I1108 00:27:02.767085 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ssb2l" podStartSLOduration=13.854002468000001 podStartE2EDuration="28.767063917s" podCreationTimestamp="2025-11-08 00:26:34 +0000 UTC" firstStartedPulling="2025-11-08 00:26:35.322195223 +0000 UTC m=+5.838439067" lastFinishedPulling="2025-11-08 00:26:50.235256572 +0000 UTC m=+20.751500516" observedRunningTime="2025-11-08 00:26:51.228741818 +0000 UTC m=+21.744985662" watchObservedRunningTime="2025-11-08 00:27:02.767063917 +0000 UTC m=+33.283307761" Nov 8 00:27:02.841732 kubelet[3381]: I1108 00:27:02.841659 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aca93d1-7412-4b1a-b135-c7557c7f84aa-tigera-ca-bundle\") pod \"calico-typha-6cbb84f6b6-4jdzx\" (UID: \"2aca93d1-7412-4b1a-b135-c7557c7f84aa\") " pod="calico-system/calico-typha-6cbb84f6b6-4jdzx" Nov 8 00:27:02.841732 kubelet[3381]: I1108 00:27:02.841707 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2aca93d1-7412-4b1a-b135-c7557c7f84aa-typha-certs\") pod \"calico-typha-6cbb84f6b6-4jdzx\" (UID: \"2aca93d1-7412-4b1a-b135-c7557c7f84aa\") " pod="calico-system/calico-typha-6cbb84f6b6-4jdzx" Nov 8 00:27:02.841732 kubelet[3381]: I1108 00:27:02.841737 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hddr\" (UniqueName: \"kubernetes.io/projected/2aca93d1-7412-4b1a-b135-c7557c7f84aa-kube-api-access-9hddr\") pod \"calico-typha-6cbb84f6b6-4jdzx\" (UID: \"2aca93d1-7412-4b1a-b135-c7557c7f84aa\") " pod="calico-system/calico-typha-6cbb84f6b6-4jdzx" Nov 8 00:27:03.043284 kubelet[3381]: I1108 00:27:03.043167 3381 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-cni-bin-dir\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044028 kubelet[3381]: I1108 00:27:03.043425 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-cni-net-dir\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044028 kubelet[3381]: I1108 00:27:03.043478 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-lib-modules\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044028 kubelet[3381]: I1108 00:27:03.043508 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29ltc\" (UniqueName: \"kubernetes.io/projected/efff08da-06e9-4079-8b05-83afc4c12ee4-kube-api-access-29ltc\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044028 kubelet[3381]: I1108 00:27:03.043556 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-flexvol-driver-host\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044028 kubelet[3381]: I1108 00:27:03.043587 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-xtables-lock\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044318 kubelet[3381]: I1108 00:27:03.043616 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-policysync\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044318 kubelet[3381]: I1108 00:27:03.043648 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/efff08da-06e9-4079-8b05-83afc4c12ee4-node-certs\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044318 kubelet[3381]: I1108 00:27:03.043676 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efff08da-06e9-4079-8b05-83afc4c12ee4-tigera-ca-bundle\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044318 kubelet[3381]: I1108 00:27:03.043702 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-var-lib-calico\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044318 kubelet[3381]: I1108 00:27:03.043729 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-var-run-calico\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.044630 kubelet[3381]: I1108 00:27:03.043763 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/efff08da-06e9-4079-8b05-83afc4c12ee4-cni-log-dir\") pod \"calico-node-gbjxn\" (UID: \"efff08da-06e9-4079-8b05-83afc4c12ee4\") " pod="calico-system/calico-node-gbjxn" Nov 8 00:27:03.084249 containerd[1841]: time="2025-11-08T00:27:03.084170290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cbb84f6b6-4jdzx,Uid:2aca93d1-7412-4b1a-b135-c7557c7f84aa,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:03.159574 kubelet[3381]: E1108 00:27:03.152645 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.159574 kubelet[3381]: W1108 00:27:03.152669 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.159574 kubelet[3381]: E1108 00:27:03.152707 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.160727 kubelet[3381]: E1108 00:27:03.160040 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.160727 kubelet[3381]: W1108 00:27:03.160060 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.160727 kubelet[3381]: E1108 00:27:03.160080 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.162057 kubelet[3381]: E1108 00:27:03.161882 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.164311 kubelet[3381]: W1108 00:27:03.164289 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.164567 kubelet[3381]: E1108 00:27:03.164519 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.165055 kubelet[3381]: E1108 00:27:03.165035 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.165055 kubelet[3381]: W1108 00:27:03.165055 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.165280 kubelet[3381]: E1108 00:27:03.165074 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.167625 kubelet[3381]: E1108 00:27:03.165617 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.167625 kubelet[3381]: W1108 00:27:03.165633 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.167625 kubelet[3381]: E1108 00:27:03.165648 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.189747 kubelet[3381]: E1108 00:27:03.189409 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:03.211457 kubelet[3381]: E1108 00:27:03.211195 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.211457 kubelet[3381]: W1108 00:27:03.211215 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.211457 kubelet[3381]: E1108 00:27:03.211234 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.225838 containerd[1841]: time="2025-11-08T00:27:03.223252869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:03.225838 containerd[1841]: time="2025-11-08T00:27:03.223340170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:03.225838 containerd[1841]: time="2025-11-08T00:27:03.223362171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:03.230631 kubelet[3381]: E1108 00:27:03.228707 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.230631 kubelet[3381]: W1108 00:27:03.228726 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.230631 kubelet[3381]: E1108 00:27:03.228746 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.232558 kubelet[3381]: E1108 00:27:03.231465 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.232558 kubelet[3381]: W1108 00:27:03.231479 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.232558 kubelet[3381]: E1108 00:27:03.231494 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.238524 containerd[1841]: time="2025-11-08T00:27:03.234031384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:03.239450 kubelet[3381]: E1108 00:27:03.239421 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.239707 kubelet[3381]: W1108 00:27:03.239442 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.239707 kubelet[3381]: E1108 00:27:03.239477 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.239963 kubelet[3381]: E1108 00:27:03.239944 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.239963 kubelet[3381]: W1108 00:27:03.239962 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.240064 kubelet[3381]: E1108 00:27:03.239977 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.242776 kubelet[3381]: E1108 00:27:03.241108 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.242776 kubelet[3381]: W1108 00:27:03.241125 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.242776 kubelet[3381]: E1108 00:27:03.241139 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.242936 kubelet[3381]: E1108 00:27:03.242812 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.242936 kubelet[3381]: W1108 00:27:03.242827 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.242936 kubelet[3381]: E1108 00:27:03.242842 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.243076 kubelet[3381]: E1108 00:27:03.243069 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.243121 kubelet[3381]: W1108 00:27:03.243079 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.243121 kubelet[3381]: E1108 00:27:03.243092 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.245279 kubelet[3381]: E1108 00:27:03.244679 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.245279 kubelet[3381]: W1108 00:27:03.244694 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.245279 kubelet[3381]: E1108 00:27:03.244707 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:27:03.246898 kubelet[3381]: E1108 00:27:03.246685 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.246898 kubelet[3381]: W1108 00:27:03.246700 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.246898 kubelet[3381]: E1108 00:27:03.246715 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:27:03.246898 kubelet[3381]: E1108 00:27:03.246900 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:27:03.247109 kubelet[3381]: W1108 00:27:03.246912 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:27:03.247109 kubelet[3381]: E1108 00:27:03.246924 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 8 00:27:03.247109 kubelet[3381]: E1108 00:27:03.247098 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.247109 kubelet[3381]: W1108 00:27:03.247108 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.247281 kubelet[3381]: E1108 00:27:03.247120 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.249721 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.251986 kubelet[3381]: W1108 00:27:03.249739 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.249754 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.249996 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.251986 kubelet[3381]: W1108 00:27:03.250007 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.250020 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.250213 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.251986 kubelet[3381]: W1108 00:27:03.250223 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.250234 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.251986 kubelet[3381]: E1108 00:27:03.251313 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.252397 kubelet[3381]: W1108 00:27:03.251325 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.252397 kubelet[3381]: E1108 00:27:03.251338 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.252397 kubelet[3381]: E1108 00:27:03.251518 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.252397 kubelet[3381]: W1108 00:27:03.251526 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.252397 kubelet[3381]: E1108 00:27:03.251551 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.256556 kubelet[3381]: E1108 00:27:03.253462 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.256556 kubelet[3381]: W1108 00:27:03.253476 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.256556 kubelet[3381]: E1108 00:27:03.253489 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.256723 kubelet[3381]: E1108 00:27:03.256615 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.256723 kubelet[3381]: W1108 00:27:03.256628 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.256723 kubelet[3381]: E1108 00:27:03.256643 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.257010 kubelet[3381]: E1108 00:27:03.256985 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.257010 kubelet[3381]: W1108 00:27:03.257010 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.257117 kubelet[3381]: E1108 00:27:03.257024 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.258881 kubelet[3381]: E1108 00:27:03.258859 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.258881 kubelet[3381]: W1108 00:27:03.258878 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.259028 kubelet[3381]: E1108 00:27:03.258892 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.261663 kubelet[3381]: E1108 00:27:03.261644 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.261663 kubelet[3381]: W1108 00:27:03.261661 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.261784 kubelet[3381]: E1108 00:27:03.261676 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.261784 kubelet[3381]: I1108 00:27:03.261707 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a8e41043-2a66-4025-a79c-fc0f732e85fb-varrun\") pod \"csi-node-driver-wt8ss\" (UID: \"a8e41043-2a66-4025-a79c-fc0f732e85fb\") " pod="calico-system/csi-node-driver-wt8ss"
Nov 8 00:27:03.263633 kubelet[3381]: E1108 00:27:03.262024 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.263633 kubelet[3381]: W1108 00:27:03.262043 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.263633 kubelet[3381]: E1108 00:27:03.262152 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.263633 kubelet[3381]: I1108 00:27:03.262176 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8e41043-2a66-4025-a79c-fc0f732e85fb-registration-dir\") pod \"csi-node-driver-wt8ss\" (UID: \"a8e41043-2a66-4025-a79c-fc0f732e85fb\") " pod="calico-system/csi-node-driver-wt8ss"
Nov 8 00:27:03.264233 containerd[1841]: time="2025-11-08T00:27:03.264015503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbjxn,Uid:efff08da-06e9-4079-8b05-83afc4c12ee4,Namespace:calico-system,Attempt:0,}"
Nov 8 00:27:03.265806 kubelet[3381]: E1108 00:27:03.265703 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.265806 kubelet[3381]: W1108 00:27:03.265720 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.265806 kubelet[3381]: E1108 00:27:03.265736 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.265806 kubelet[3381]: I1108 00:27:03.265759 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e41043-2a66-4025-a79c-fc0f732e85fb-kubelet-dir\") pod \"csi-node-driver-wt8ss\" (UID: \"a8e41043-2a66-4025-a79c-fc0f732e85fb\") " pod="calico-system/csi-node-driver-wt8ss"
Nov 8 00:27:03.267156 kubelet[3381]: E1108 00:27:03.266083 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.267156 kubelet[3381]: W1108 00:27:03.266099 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.267156 kubelet[3381]: E1108 00:27:03.266772 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.267156 kubelet[3381]: I1108 00:27:03.266803 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8e41043-2a66-4025-a79c-fc0f732e85fb-socket-dir\") pod \"csi-node-driver-wt8ss\" (UID: \"a8e41043-2a66-4025-a79c-fc0f732e85fb\") " pod="calico-system/csi-node-driver-wt8ss"
Nov 8 00:27:03.267355 kubelet[3381]: E1108 00:27:03.267270 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.267355 kubelet[3381]: W1108 00:27:03.267282 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.268270 kubelet[3381]: E1108 00:27:03.267620 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.272551 kubelet[3381]: E1108 00:27:03.268696 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.272551 kubelet[3381]: W1108 00:27:03.268711 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.272551 kubelet[3381]: E1108 00:27:03.269629 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.272551 kubelet[3381]: E1108 00:27:03.270628 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.272551 kubelet[3381]: W1108 00:27:03.270642 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.272808 kubelet[3381]: E1108 00:27:03.272629 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.272808 kubelet[3381]: E1108 00:27:03.272801 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.272893 kubelet[3381]: W1108 00:27:03.272812 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.275555 kubelet[3381]: E1108 00:27:03.273983 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.275555 kubelet[3381]: E1108 00:27:03.274126 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.275555 kubelet[3381]: W1108 00:27:03.274147 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.275555 kubelet[3381]: E1108 00:27:03.274571 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.275555 kubelet[3381]: I1108 00:27:03.274600 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn49c\" (UniqueName: \"kubernetes.io/projected/a8e41043-2a66-4025-a79c-fc0f732e85fb-kube-api-access-wn49c\") pod \"csi-node-driver-wt8ss\" (UID: \"a8e41043-2a66-4025-a79c-fc0f732e85fb\") " pod="calico-system/csi-node-driver-wt8ss"
Nov 8 00:27:03.275820 kubelet[3381]: E1108 00:27:03.275613 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.275820 kubelet[3381]: W1108 00:27:03.275625 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.275820 kubelet[3381]: E1108 00:27:03.275709 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.276572 kubelet[3381]: E1108 00:27:03.276543 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.276572 kubelet[3381]: W1108 00:27:03.276568 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.276692 kubelet[3381]: E1108 00:27:03.276583 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.278465 kubelet[3381]: E1108 00:27:03.278447 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.278465 kubelet[3381]: W1108 00:27:03.278465 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.278624 kubelet[3381]: E1108 00:27:03.278482 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.280648 kubelet[3381]: E1108 00:27:03.279732 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.280648 kubelet[3381]: W1108 00:27:03.279747 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.280648 kubelet[3381]: E1108 00:27:03.279760 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.281510 kubelet[3381]: E1108 00:27:03.281169 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.281510 kubelet[3381]: W1108 00:27:03.281183 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.281510 kubelet[3381]: E1108 00:27:03.281197 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.282748 kubelet[3381]: E1108 00:27:03.282688 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.282748 kubelet[3381]: W1108 00:27:03.282703 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.282748 kubelet[3381]: E1108 00:27:03.282716 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.346183 containerd[1841]: time="2025-11-08T00:27:03.345717972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:27:03.346183 containerd[1841]: time="2025-11-08T00:27:03.345782773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:27:03.346183 containerd[1841]: time="2025-11-08T00:27:03.345802973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:27:03.348848 containerd[1841]: time="2025-11-08T00:27:03.348773304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:27:03.378661 kubelet[3381]: E1108 00:27:03.378637 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.378802 kubelet[3381]: W1108 00:27:03.378784 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.378907 kubelet[3381]: E1108 00:27:03.378892 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.379293 kubelet[3381]: E1108 00:27:03.379276 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.380549 kubelet[3381]: W1108 00:27:03.379393 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.380549 kubelet[3381]: E1108 00:27:03.379425 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.381041 kubelet[3381]: E1108 00:27:03.380932 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.381041 kubelet[3381]: W1108 00:27:03.380948 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.381041 kubelet[3381]: E1108 00:27:03.380970 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.381494 kubelet[3381]: E1108 00:27:03.381368 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.381494 kubelet[3381]: W1108 00:27:03.381383 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.381494 kubelet[3381]: E1108 00:27:03.381397 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.381931 kubelet[3381]: E1108 00:27:03.381794 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.381931 kubelet[3381]: W1108 00:27:03.381808 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.381931 kubelet[3381]: E1108 00:27:03.381821 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.382554 kubelet[3381]: E1108 00:27:03.382204 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.382554 kubelet[3381]: W1108 00:27:03.382217 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.382554 kubelet[3381]: E1108 00:27:03.382231 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.383071 kubelet[3381]: E1108 00:27:03.382861 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.383071 kubelet[3381]: W1108 00:27:03.382876 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.383071 kubelet[3381]: E1108 00:27:03.382891 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.383934 kubelet[3381]: E1108 00:27:03.383806 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.383934 kubelet[3381]: W1108 00:27:03.383822 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.383934 kubelet[3381]: E1108 00:27:03.383852 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.384185 kubelet[3381]: E1108 00:27:03.384144 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.384185 kubelet[3381]: W1108 00:27:03.384158 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.384816 kubelet[3381]: E1108 00:27:03.384727 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.385590 kubelet[3381]: E1108 00:27:03.384959 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.385590 kubelet[3381]: W1108 00:27:03.384974 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.385590 kubelet[3381]: E1108 00:27:03.385276 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.386107 kubelet[3381]: E1108 00:27:03.385930 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.386107 kubelet[3381]: W1108 00:27:03.385945 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.386391 kubelet[3381]: E1108 00:27:03.386258 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.386391 kubelet[3381]: W1108 00:27:03.386272 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.386391 kubelet[3381]: E1108 00:27:03.386388 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.387696 kubelet[3381]: E1108 00:27:03.386411 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.387860 kubelet[3381]: E1108 00:27:03.387820 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.387860 kubelet[3381]: W1108 00:27:03.387835 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.388095 kubelet[3381]: E1108 00:27:03.388057 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.388370 kubelet[3381]: E1108 00:27:03.388280 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.388370 kubelet[3381]: W1108 00:27:03.388293 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.388639 kubelet[3381]: E1108 00:27:03.388493 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.388871 kubelet[3381]: E1108 00:27:03.388774 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.388871 kubelet[3381]: W1108 00:27:03.388788 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.389095 kubelet[3381]: E1108 00:27:03.388989 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.389316 kubelet[3381]: E1108 00:27:03.389215 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.389316 kubelet[3381]: W1108 00:27:03.389228 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.389713 kubelet[3381]: E1108 00:27:03.389600 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.390333 kubelet[3381]: E1108 00:27:03.390318 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.390482 kubelet[3381]: W1108 00:27:03.390413 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.390482 kubelet[3381]: E1108 00:27:03.390448 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.391462 kubelet[3381]: E1108 00:27:03.391445 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.391677 kubelet[3381]: W1108 00:27:03.391573 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.392059 kubelet[3381]: E1108 00:27:03.391958 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.392059 kubelet[3381]: E1108 00:27:03.391872 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.392059 kubelet[3381]: W1108 00:27:03.391973 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.392335 kubelet[3381]: E1108 00:27:03.392106 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.393579 kubelet[3381]: E1108 00:27:03.392707 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.393579 kubelet[3381]: W1108 00:27:03.392722 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.393579 kubelet[3381]: E1108 00:27:03.392912 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.394030 kubelet[3381]: E1108 00:27:03.393795 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.394030 kubelet[3381]: W1108 00:27:03.393809 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.394030 kubelet[3381]: E1108 00:27:03.393899 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.394680 kubelet[3381]: E1108 00:27:03.394388 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.394680 kubelet[3381]: W1108 00:27:03.394403 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.395989 kubelet[3381]: E1108 00:27:03.395742 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.396260 kubelet[3381]: E1108 00:27:03.396145 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.396260 kubelet[3381]: W1108 00:27:03.396159 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.396390 kubelet[3381]: E1108 00:27:03.396376 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.396722 kubelet[3381]: E1108 00:27:03.396641 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.396722 kubelet[3381]: W1108 00:27:03.396657 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.396722 kubelet[3381]: E1108 00:27:03.396698 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.397841 kubelet[3381]: E1108 00:27:03.397785 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.397841 kubelet[3381]: W1108 00:27:03.397800 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.397841 kubelet[3381]: E1108 00:27:03.397813 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.417803 kubelet[3381]: E1108 00:27:03.417777 3381 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:27:03.417803 kubelet[3381]: W1108 00:27:03.417799 3381 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:27:03.417938 kubelet[3381]: E1108 00:27:03.417818 3381 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:27:03.447570 containerd[1841]: time="2025-11-08T00:27:03.447471954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbjxn,Uid:efff08da-06e9-4079-8b05-83afc4c12ee4,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\""
Nov 8 00:27:03.455557 containerd[1841]: time="2025-11-08T00:27:03.454126825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:27:03.505121 containerd[1841]: time="2025-11-08T00:27:03.505063267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cbb84f6b6-4jdzx,Uid:2aca93d1-7412-4b1a-b135-c7557c7f84aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2097f3fca12f2cf06579860dbdfe935257a9705c3ea20b4de124a7601e065f1\""
Nov 8 00:27:04.643010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179465883.mount: Deactivated successfully.
Nov 8 00:27:04.761445 containerd[1841]: time="2025-11-08T00:27:04.761393929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:04.763923 containerd[1841]: time="2025-11-08T00:27:04.763672454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 8 00:27:04.766992 containerd[1841]: time="2025-11-08T00:27:04.766765687Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:04.771451 containerd[1841]: time="2025-11-08T00:27:04.771414036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:04.772122 containerd[1841]: time="2025-11-08T00:27:04.772084343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.317916918s" Nov 8 00:27:04.772338 containerd[1841]: time="2025-11-08T00:27:04.772225045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:27:04.773813 containerd[1841]: time="2025-11-08T00:27:04.773694860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:27:04.775671 containerd[1841]: time="2025-11-08T00:27:04.775643581Z" level=info msg="CreateContainer within sandbox 
\"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:27:04.810725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972729150.mount: Deactivated successfully. Nov 8 00:27:04.814852 containerd[1841]: time="2025-11-08T00:27:04.814754197Z" level=info msg="CreateContainer within sandbox \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22dd8ff073d019bc0c1e51a1570014df2e56087f935c121d73e4bb6faef6db0e\"" Nov 8 00:27:04.815564 containerd[1841]: time="2025-11-08T00:27:04.815521305Z" level=info msg="StartContainer for \"22dd8ff073d019bc0c1e51a1570014df2e56087f935c121d73e4bb6faef6db0e\"" Nov 8 00:27:04.877724 containerd[1841]: time="2025-11-08T00:27:04.877682366Z" level=info msg="StartContainer for \"22dd8ff073d019bc0c1e51a1570014df2e56087f935c121d73e4bb6faef6db0e\" returns successfully" Nov 8 00:27:05.149306 kubelet[3381]: E1108 00:27:05.148449 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:05.740405 containerd[1841]: time="2025-11-08T00:27:05.740350942Z" level=info msg="shim disconnected" id=22dd8ff073d019bc0c1e51a1570014df2e56087f935c121d73e4bb6faef6db0e namespace=k8s.io Nov 8 00:27:05.740405 containerd[1841]: time="2025-11-08T00:27:05.740400042Z" level=warning msg="cleaning up after shim disconnected" id=22dd8ff073d019bc0c1e51a1570014df2e56087f935c121d73e4bb6faef6db0e namespace=k8s.io Nov 8 00:27:05.740405 containerd[1841]: time="2025-11-08T00:27:05.740410643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:27:07.148699 kubelet[3381]: E1108 00:27:07.148627 3381 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:07.996488 containerd[1841]: time="2025-11-08T00:27:07.996435661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:07.999396 containerd[1841]: time="2025-11-08T00:27:07.999351991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 8 00:27:08.004565 containerd[1841]: time="2025-11-08T00:27:08.003080430Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:08.008390 containerd[1841]: time="2025-11-08T00:27:08.008347584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:08.009364 containerd[1841]: time="2025-11-08T00:27:08.009324394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.235590233s" Nov 8 00:27:08.009512 containerd[1841]: time="2025-11-08T00:27:08.009487996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:27:08.011270 containerd[1841]: 
time="2025-11-08T00:27:08.011239114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:27:08.021404 containerd[1841]: time="2025-11-08T00:27:08.021377020Z" level=info msg="CreateContainer within sandbox \"a2097f3fca12f2cf06579860dbdfe935257a9705c3ea20b4de124a7601e065f1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:27:08.048065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175333384.mount: Deactivated successfully. Nov 8 00:27:08.060333 containerd[1841]: time="2025-11-08T00:27:08.060302424Z" level=info msg="CreateContainer within sandbox \"a2097f3fca12f2cf06579860dbdfe935257a9705c3ea20b4de124a7601e065f1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"013eab5bbcabee2975af510902f14fb499406562345687386e39c02026a8900f\"" Nov 8 00:27:08.060890 containerd[1841]: time="2025-11-08T00:27:08.060865130Z" level=info msg="StartContainer for \"013eab5bbcabee2975af510902f14fb499406562345687386e39c02026a8900f\"" Nov 8 00:27:08.141277 containerd[1841]: time="2025-11-08T00:27:08.141231164Z" level=info msg="StartContainer for \"013eab5bbcabee2975af510902f14fb499406562345687386e39c02026a8900f\" returns successfully" Nov 8 00:27:09.150557 kubelet[3381]: E1108 00:27:09.149173 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:09.305815 kubelet[3381]: I1108 00:27:09.305751 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cbb84f6b6-4jdzx" podStartSLOduration=2.802747141 podStartE2EDuration="7.305730555s" podCreationTimestamp="2025-11-08 00:27:02 +0000 UTC" firstStartedPulling="2025-11-08 00:27:03.507522493 +0000 UTC m=+34.023766437" 
lastFinishedPulling="2025-11-08 00:27:08.010506007 +0000 UTC m=+38.526749851" observedRunningTime="2025-11-08 00:27:08.310923426 +0000 UTC m=+38.827167270" watchObservedRunningTime="2025-11-08 00:27:09.305730555 +0000 UTC m=+39.821974499" Nov 8 00:27:11.148146 kubelet[3381]: E1108 00:27:11.148082 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:11.158644 containerd[1841]: time="2025-11-08T00:27:11.158600193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:11.160474 containerd[1841]: time="2025-11-08T00:27:11.160419412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:27:11.163605 containerd[1841]: time="2025-11-08T00:27:11.163527744Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:11.167293 containerd[1841]: time="2025-11-08T00:27:11.167246283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:11.168643 containerd[1841]: time="2025-11-08T00:27:11.167921690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.156641375s" Nov 
8 00:27:11.168643 containerd[1841]: time="2025-11-08T00:27:11.167963691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:27:11.171126 containerd[1841]: time="2025-11-08T00:27:11.171096823Z" level=info msg="CreateContainer within sandbox \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:27:11.201869 containerd[1841]: time="2025-11-08T00:27:11.201770842Z" level=info msg="CreateContainer within sandbox \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029\"" Nov 8 00:27:11.202419 containerd[1841]: time="2025-11-08T00:27:11.202390248Z" level=info msg="StartContainer for \"6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029\"" Nov 8 00:27:11.270483 containerd[1841]: time="2025-11-08T00:27:11.270441555Z" level=info msg="StartContainer for \"6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029\" returns successfully" Nov 8 00:27:12.887373 containerd[1841]: time="2025-11-08T00:27:12.887302842Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:27:12.894172 kubelet[3381]: I1108 00:27:12.892765 3381 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:27:12.924943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029-rootfs.mount: Deactivated successfully. 
Nov 8 00:27:12.979722 kubelet[3381]: I1108 00:27:12.979322 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsnz\" (UniqueName: \"kubernetes.io/projected/dc33bba6-34c3-4255-9ae9-b94ef472a17a-kube-api-access-fvsnz\") pod \"coredns-668d6bf9bc-5nvxf\" (UID: \"dc33bba6-34c3-4255-9ae9-b94ef472a17a\") " pod="kube-system/coredns-668d6bf9bc-5nvxf" Nov 8 00:27:12.979722 kubelet[3381]: I1108 00:27:12.979367 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14ad17b-6d81-4bc3-936c-20b5e88e9ac4-tigera-ca-bundle\") pod \"calico-kube-controllers-76c9ff5fff-tvhdb\" (UID: \"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4\") " pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" Nov 8 00:27:12.979722 kubelet[3381]: I1108 00:27:12.979398 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22022de4-7569-4e54-9627-bc50a2dfeb17-calico-apiserver-certs\") pod \"calico-apiserver-74c54bbcd-hkm9n\" (UID: \"22022de4-7569-4e54-9627-bc50a2dfeb17\") " pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" Nov 8 00:27:12.979722 kubelet[3381]: I1108 00:27:12.979423 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfx8\" (UniqueName: \"kubernetes.io/projected/22022de4-7569-4e54-9627-bc50a2dfeb17-kube-api-access-tgfx8\") pod \"calico-apiserver-74c54bbcd-hkm9n\" (UID: \"22022de4-7569-4e54-9627-bc50a2dfeb17\") " pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" Nov 8 00:27:12.979722 kubelet[3381]: I1108 00:27:12.979448 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9436a-adbc-4a13-bbea-b53d5615fa79-config\") pod 
\"goldmane-666569f655-49g92\" (UID: \"55c9436a-adbc-4a13-bbea-b53d5615fa79\") " pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:12.980111 kubelet[3381]: I1108 00:27:12.979473 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc33bba6-34c3-4255-9ae9-b94ef472a17a-config-volume\") pod \"coredns-668d6bf9bc-5nvxf\" (UID: \"dc33bba6-34c3-4255-9ae9-b94ef472a17a\") " pod="kube-system/coredns-668d6bf9bc-5nvxf" Nov 8 00:27:12.980111 kubelet[3381]: I1108 00:27:12.979498 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzf2\" (UniqueName: \"kubernetes.io/projected/a34a1c39-70fa-4702-b13c-76dd4159174f-kube-api-access-wzzf2\") pod \"coredns-668d6bf9bc-dlmrh\" (UID: \"a34a1c39-70fa-4702-b13c-76dd4159174f\") " pod="kube-system/coredns-668d6bf9bc-dlmrh" Nov 8 00:27:12.980111 kubelet[3381]: I1108 00:27:12.979521 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-backend-key-pair\") pod \"whisker-68bf965c6-fvbxd\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " pod="calico-system/whisker-68bf965c6-fvbxd" Nov 8 00:27:12.984874 kubelet[3381]: I1108 00:27:12.980321 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/55c9436a-adbc-4a13-bbea-b53d5615fa79-goldmane-key-pair\") pod \"goldmane-666569f655-49g92\" (UID: \"55c9436a-adbc-4a13-bbea-b53d5615fa79\") " pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:12.984874 kubelet[3381]: I1108 00:27:12.980649 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/2bc4797e-b4d5-4e92-8a88-33c63b1aa854-calico-apiserver-certs\") pod \"calico-apiserver-74c54bbcd-68mbw\" (UID: \"2bc4797e-b4d5-4e92-8a88-33c63b1aa854\") " pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" Nov 8 00:27:12.984874 kubelet[3381]: I1108 00:27:12.980677 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34a1c39-70fa-4702-b13c-76dd4159174f-config-volume\") pod \"coredns-668d6bf9bc-dlmrh\" (UID: \"a34a1c39-70fa-4702-b13c-76dd4159174f\") " pod="kube-system/coredns-668d6bf9bc-dlmrh" Nov 8 00:27:12.984874 kubelet[3381]: I1108 00:27:12.980700 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqmj\" (UniqueName: \"kubernetes.io/projected/fb5221fc-e33f-46be-85b7-15f7219674b7-kube-api-access-frqmj\") pod \"whisker-68bf965c6-fvbxd\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " pod="calico-system/whisker-68bf965c6-fvbxd" Nov 8 00:27:12.984874 kubelet[3381]: I1108 00:27:12.980733 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kx9\" (UniqueName: \"kubernetes.io/projected/c14ad17b-6d81-4bc3-936c-20b5e88e9ac4-kube-api-access-x4kx9\") pod \"calico-kube-controllers-76c9ff5fff-tvhdb\" (UID: \"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4\") " pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" Nov 8 00:27:12.985087 kubelet[3381]: I1108 00:27:12.980757 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-ca-bundle\") pod \"whisker-68bf965c6-fvbxd\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " pod="calico-system/whisker-68bf965c6-fvbxd" Nov 8 00:27:12.985087 kubelet[3381]: I1108 00:27:12.980831 3381 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c9436a-adbc-4a13-bbea-b53d5615fa79-goldmane-ca-bundle\") pod \"goldmane-666569f655-49g92\" (UID: \"55c9436a-adbc-4a13-bbea-b53d5615fa79\") " pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:12.985087 kubelet[3381]: I1108 00:27:12.980855 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkbkd\" (UniqueName: \"kubernetes.io/projected/55c9436a-adbc-4a13-bbea-b53d5615fa79-kube-api-access-pkbkd\") pod \"goldmane-666569f655-49g92\" (UID: \"55c9436a-adbc-4a13-bbea-b53d5615fa79\") " pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:12.985087 kubelet[3381]: I1108 00:27:12.980878 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg77\" (UniqueName: \"kubernetes.io/projected/2bc4797e-b4d5-4e92-8a88-33c63b1aa854-kube-api-access-5rg77\") pod \"calico-apiserver-74c54bbcd-68mbw\" (UID: \"2bc4797e-b4d5-4e92-8a88-33c63b1aa854\") " pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" Nov 8 00:27:13.152148 containerd[1841]: time="2025-11-08T00:27:13.151176482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wt8ss,Uid:a8e41043-2a66-4025-a79c-fc0f732e85fb,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:13.255274 containerd[1841]: time="2025-11-08T00:27:13.255232263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlmrh,Uid:a34a1c39-70fa-4702-b13c-76dd4159174f,Namespace:kube-system,Attempt:0,}" Nov 8 00:27:13.267352 containerd[1841]: time="2025-11-08T00:27:13.267308688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c9ff5fff-tvhdb,Uid:c14ad17b-6d81-4bc3-936c-20b5e88e9ac4,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:13.270518 containerd[1841]: time="2025-11-08T00:27:13.270468421Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bf965c6-fvbxd,Uid:fb5221fc-e33f-46be-85b7-15f7219674b7,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:13.276000 containerd[1841]: time="2025-11-08T00:27:13.275964278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nvxf,Uid:dc33bba6-34c3-4255-9ae9-b94ef472a17a,Namespace:kube-system,Attempt:0,}" Nov 8 00:27:13.301132 containerd[1841]: time="2025-11-08T00:27:13.301105439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-hkm9n,Uid:22022de4-7569-4e54-9627-bc50a2dfeb17,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:27:13.307076 containerd[1841]: time="2025-11-08T00:27:13.307043300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-49g92,Uid:55c9436a-adbc-4a13-bbea-b53d5615fa79,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:13.307325 containerd[1841]: time="2025-11-08T00:27:13.307296803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-68mbw,Uid:2bc4797e-b4d5-4e92-8a88-33c63b1aa854,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:27:14.813564 containerd[1841]: time="2025-11-08T00:27:14.813462642Z" level=info msg="shim disconnected" id=6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029 namespace=k8s.io Nov 8 00:27:14.813564 containerd[1841]: time="2025-11-08T00:27:14.813549442Z" level=warning msg="cleaning up after shim disconnected" id=6db1d6f03288679f40db1e12bfd471f8e58b88dd32e3f56ea1d5fbd05d6e5029 namespace=k8s.io Nov 8 00:27:14.813564 containerd[1841]: time="2025-11-08T00:27:14.813565943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:27:15.224627 containerd[1841]: time="2025-11-08T00:27:15.223684455Z" level=error msg="Failed to destroy network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.224627 containerd[1841]: time="2025-11-08T00:27:15.224071659Z" level=error msg="encountered an error cleaning up failed sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.224627 containerd[1841]: time="2025-11-08T00:27:15.224136660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c9ff5fff-tvhdb,Uid:c14ad17b-6d81-4bc3-936c-20b5e88e9ac4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.225933 kubelet[3381]: E1108 00:27:15.224454 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.225933 kubelet[3381]: E1108 00:27:15.224558 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" Nov 8 00:27:15.225933 kubelet[3381]: E1108 00:27:15.224593 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" Nov 8 00:27:15.226444 kubelet[3381]: E1108 00:27:15.224653 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:15.228661 containerd[1841]: time="2025-11-08T00:27:15.228622704Z" level=error msg="Failed to destroy network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.229160 containerd[1841]: time="2025-11-08T00:27:15.229119709Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.229328 containerd[1841]: time="2025-11-08T00:27:15.229292911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-hkm9n,Uid:22022de4-7569-4e54-9627-bc50a2dfeb17,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.229732 kubelet[3381]: E1108 00:27:15.229699 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.229950 kubelet[3381]: E1108 00:27:15.229888 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" Nov 8 00:27:15.230114 kubelet[3381]: E1108 00:27:15.230090 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" Nov 8 00:27:15.230303 kubelet[3381]: E1108 00:27:15.230241 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:15.233183 containerd[1841]: time="2025-11-08T00:27:15.233147849Z" level=error msg="Failed to destroy network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.233820 containerd[1841]: time="2025-11-08T00:27:15.233597853Z" level=error msg="encountered an error cleaning up failed sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.233820 
containerd[1841]: time="2025-11-08T00:27:15.233672454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wt8ss,Uid:a8e41043-2a66-4025-a79c-fc0f732e85fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.233959 kubelet[3381]: E1108 00:27:15.233879 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.233959 kubelet[3381]: E1108 00:27:15.233922 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wt8ss" Nov 8 00:27:15.233959 kubelet[3381]: E1108 00:27:15.233947 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wt8ss" Nov 8 00:27:15.234088 kubelet[3381]: E1108 00:27:15.233986 
3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:15.261959 containerd[1841]: time="2025-11-08T00:27:15.261474830Z" level=error msg="Failed to destroy network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.264494 containerd[1841]: time="2025-11-08T00:27:15.263091046Z" level=error msg="encountered an error cleaning up failed sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.265453 containerd[1841]: time="2025-11-08T00:27:15.265320768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-49g92,Uid:55c9436a-adbc-4a13-bbea-b53d5615fa79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.267325 kubelet[3381]: E1108 00:27:15.266445 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.267325 kubelet[3381]: E1108 00:27:15.266496 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:15.267325 kubelet[3381]: E1108 00:27:15.266522 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-49g92" Nov 8 00:27:15.267575 kubelet[3381]: E1108 00:27:15.266599 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:15.269888 containerd[1841]: time="2025-11-08T00:27:15.269729512Z" level=error msg="Failed to destroy network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.272609 containerd[1841]: time="2025-11-08T00:27:15.272579040Z" level=error msg="encountered an error cleaning up failed sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.272785 containerd[1841]: time="2025-11-08T00:27:15.272739742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlmrh,Uid:a34a1c39-70fa-4702-b13c-76dd4159174f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.273427 kubelet[3381]: E1108 00:27:15.273010 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.273427 kubelet[3381]: E1108 00:27:15.273072 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dlmrh" Nov 8 00:27:15.273427 kubelet[3381]: E1108 00:27:15.273100 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dlmrh" Nov 8 00:27:15.273669 kubelet[3381]: E1108 00:27:15.273161 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dlmrh_kube-system(a34a1c39-70fa-4702-b13c-76dd4159174f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dlmrh_kube-system(a34a1c39-70fa-4702-b13c-76dd4159174f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dlmrh" 
podUID="a34a1c39-70fa-4702-b13c-76dd4159174f" Nov 8 00:27:15.276871 containerd[1841]: time="2025-11-08T00:27:15.276761081Z" level=error msg="Failed to destroy network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.277191 containerd[1841]: time="2025-11-08T00:27:15.277158985Z" level=error msg="encountered an error cleaning up failed sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.277439 containerd[1841]: time="2025-11-08T00:27:15.277298387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-68mbw,Uid:2bc4797e-b4d5-4e92-8a88-33c63b1aa854,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.277554 kubelet[3381]: E1108 00:27:15.277451 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.277554 kubelet[3381]: E1108 00:27:15.277504 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" Nov 8 00:27:15.277662 kubelet[3381]: E1108 00:27:15.277578 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" Nov 8 00:27:15.277709 kubelet[3381]: E1108 00:27:15.277642 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:15.282571 containerd[1841]: time="2025-11-08T00:27:15.281325627Z" level=error msg="Failed to destroy network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.282571 containerd[1841]: time="2025-11-08T00:27:15.282034834Z" level=error msg="encountered an error cleaning up failed sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.282571 containerd[1841]: time="2025-11-08T00:27:15.282108335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bf965c6-fvbxd,Uid:fb5221fc-e33f-46be-85b7-15f7219674b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.282739 kubelet[3381]: E1108 00:27:15.282290 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.282739 kubelet[3381]: E1108 00:27:15.282380 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-68bf965c6-fvbxd" Nov 8 00:27:15.282739 kubelet[3381]: E1108 00:27:15.282419 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68bf965c6-fvbxd" Nov 8 00:27:15.282881 kubelet[3381]: E1108 00:27:15.282465 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68bf965c6-fvbxd_calico-system(fb5221fc-e33f-46be-85b7-15f7219674b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68bf965c6-fvbxd_calico-system(fb5221fc-e33f-46be-85b7-15f7219674b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68bf965c6-fvbxd" podUID="fb5221fc-e33f-46be-85b7-15f7219674b7" Nov 8 00:27:15.284709 containerd[1841]: time="2025-11-08T00:27:15.284661760Z" level=error msg="Failed to destroy network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.285283 containerd[1841]: time="2025-11-08T00:27:15.285114164Z" level=error msg="encountered an error cleaning up failed sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.285283 containerd[1841]: time="2025-11-08T00:27:15.285179765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nvxf,Uid:dc33bba6-34c3-4255-9ae9-b94ef472a17a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.285732 kubelet[3381]: E1108 00:27:15.285379 3381 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.285732 kubelet[3381]: E1108 00:27:15.285436 3381 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nvxf" Nov 8 00:27:15.285732 kubelet[3381]: E1108 00:27:15.285456 3381 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nvxf" Nov 8 00:27:15.285965 kubelet[3381]: E1108 00:27:15.285660 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5nvxf_kube-system(dc33bba6-34c3-4255-9ae9-b94ef472a17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5nvxf_kube-system(dc33bba6-34c3-4255-9ae9-b94ef472a17a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nvxf" podUID="dc33bba6-34c3-4255-9ae9-b94ef472a17a" Nov 8 00:27:15.298727 kubelet[3381]: I1108 00:27:15.298705 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:15.299843 containerd[1841]: time="2025-11-08T00:27:15.299379106Z" level=info msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" Nov 8 00:27:15.299843 containerd[1841]: time="2025-11-08T00:27:15.299577508Z" level=info msg="Ensure that sandbox 705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6 in task-service has been cleanup successfully" Nov 8 00:27:15.301950 kubelet[3381]: I1108 00:27:15.301831 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:15.302203 containerd[1841]: time="2025-11-08T00:27:15.302177634Z" level=info msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" Nov 8 00:27:15.302377 containerd[1841]: 
time="2025-11-08T00:27:15.302340235Z" level=info msg="Ensure that sandbox 5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b in task-service has been cleanup successfully" Nov 8 00:27:15.303233 kubelet[3381]: I1108 00:27:15.303218 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:15.304373 containerd[1841]: time="2025-11-08T00:27:15.303911351Z" level=info msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" Nov 8 00:27:15.304373 containerd[1841]: time="2025-11-08T00:27:15.304096153Z" level=info msg="Ensure that sandbox 717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c in task-service has been cleanup successfully" Nov 8 00:27:15.307658 kubelet[3381]: I1108 00:27:15.307638 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:15.310231 containerd[1841]: time="2025-11-08T00:27:15.309939111Z" level=info msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" Nov 8 00:27:15.312492 containerd[1841]: time="2025-11-08T00:27:15.312466236Z" level=info msg="Ensure that sandbox c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4 in task-service has been cleanup successfully" Nov 8 00:27:15.316580 kubelet[3381]: I1108 00:27:15.315985 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:15.317920 containerd[1841]: time="2025-11-08T00:27:15.317874889Z" level=info msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" Nov 8 00:27:15.321497 containerd[1841]: time="2025-11-08T00:27:15.321467825Z" level=info msg="Ensure that sandbox 
866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd in task-service has been cleanup successfully" Nov 8 00:27:15.323026 kubelet[3381]: I1108 00:27:15.322987 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:15.325680 containerd[1841]: time="2025-11-08T00:27:15.325655266Z" level=info msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" Nov 8 00:27:15.325849 containerd[1841]: time="2025-11-08T00:27:15.325817468Z" level=info msg="Ensure that sandbox dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d in task-service has been cleanup successfully" Nov 8 00:27:15.333786 kubelet[3381]: I1108 00:27:15.333377 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:15.334551 containerd[1841]: time="2025-11-08T00:27:15.334047050Z" level=info msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" Nov 8 00:27:15.334551 containerd[1841]: time="2025-11-08T00:27:15.334261352Z" level=info msg="Ensure that sandbox 76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7 in task-service has been cleanup successfully" Nov 8 00:27:15.348219 containerd[1841]: time="2025-11-08T00:27:15.348183890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:27:15.349371 kubelet[3381]: I1108 00:27:15.349005 3381 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:15.351918 containerd[1841]: time="2025-11-08T00:27:15.351291121Z" level=info msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" Nov 8 00:27:15.359894 containerd[1841]: time="2025-11-08T00:27:15.359668004Z" level=info 
msg="Ensure that sandbox 3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3 in task-service has been cleanup successfully" Nov 8 00:27:15.427733 containerd[1841]: time="2025-11-08T00:27:15.427680578Z" level=error msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" failed" error="failed to destroy network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.428175 kubelet[3381]: E1108 00:27:15.428133 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:15.428414 kubelet[3381]: E1108 00:27:15.428350 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c"} Nov 8 00:27:15.428589 kubelet[3381]: E1108 00:27:15.428529 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55c9436a-adbc-4a13-bbea-b53d5615fa79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.428793 kubelet[3381]: E1108 00:27:15.428765 3381 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55c9436a-adbc-4a13-bbea-b53d5615fa79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:15.444664 containerd[1841]: time="2025-11-08T00:27:15.444610746Z" level=error msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" failed" error="failed to destroy network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.444882 kubelet[3381]: E1108 00:27:15.444833 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:15.444961 kubelet[3381]: E1108 00:27:15.444895 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd"} Nov 8 00:27:15.444961 kubelet[3381]: E1108 00:27:15.444936 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"fb5221fc-e33f-46be-85b7-15f7219674b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.445097 kubelet[3381]: E1108 00:27:15.444969 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb5221fc-e33f-46be-85b7-15f7219674b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68bf965c6-fvbxd" podUID="fb5221fc-e33f-46be-85b7-15f7219674b7" Nov 8 00:27:15.453830 containerd[1841]: time="2025-11-08T00:27:15.453379133Z" level=error msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" failed" error="failed to destroy network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.453926 kubelet[3381]: E1108 00:27:15.453690 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:15.453926 kubelet[3381]: E1108 00:27:15.453734 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d"} Nov 8 00:27:15.453926 kubelet[3381]: E1108 00:27:15.453779 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8e41043-2a66-4025-a79c-fc0f732e85fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.454170 kubelet[3381]: E1108 00:27:15.454130 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8e41043-2a66-4025-a79c-fc0f732e85fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:15.461918 containerd[1841]: time="2025-11-08T00:27:15.461878017Z" level=error msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" failed" error="failed to destroy network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:27:15.462209 kubelet[3381]: E1108 00:27:15.462179 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:15.462295 kubelet[3381]: E1108 00:27:15.462215 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6"} Nov 8 00:27:15.462295 kubelet[3381]: E1108 00:27:15.462249 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bc4797e-b4d5-4e92-8a88-33c63b1aa854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.462295 kubelet[3381]: E1108 00:27:15.462283 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bc4797e-b4d5-4e92-8a88-33c63b1aa854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:15.465856 
containerd[1841]: time="2025-11-08T00:27:15.465819856Z" level=error msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" failed" error="failed to destroy network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.466169 kubelet[3381]: E1108 00:27:15.466141 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:15.466887 kubelet[3381]: E1108 00:27:15.466266 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b"} Nov 8 00:27:15.466887 kubelet[3381]: E1108 00:27:15.466304 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.466887 kubelet[3381]: E1108 00:27:15.466332 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:15.480766 containerd[1841]: time="2025-11-08T00:27:15.480673504Z" level=error msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" failed" error="failed to destroy network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.481837 kubelet[3381]: E1108 00:27:15.481803 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:15.483012 kubelet[3381]: E1108 00:27:15.482693 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4"} Nov 8 00:27:15.483012 kubelet[3381]: E1108 00:27:15.482745 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22022de4-7569-4e54-9627-bc50a2dfeb17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.483012 kubelet[3381]: E1108 00:27:15.482778 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22022de4-7569-4e54-9627-bc50a2dfeb17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:15.487689 containerd[1841]: time="2025-11-08T00:27:15.487654373Z" level=error msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" failed" error="failed to destroy network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.488134 kubelet[3381]: E1108 00:27:15.488083 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:15.488371 kubelet[3381]: E1108 00:27:15.488259 3381 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3"} Nov 8 00:27:15.488371 kubelet[3381]: E1108 00:27:15.488319 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc33bba6-34c3-4255-9ae9-b94ef472a17a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.488371 kubelet[3381]: E1108 00:27:15.488345 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc33bba6-34c3-4255-9ae9-b94ef472a17a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nvxf" podUID="dc33bba6-34c3-4255-9ae9-b94ef472a17a" Nov 8 00:27:15.491080 containerd[1841]: time="2025-11-08T00:27:15.491042606Z" level=error msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" failed" error="failed to destroy network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:27:15.491327 kubelet[3381]: E1108 00:27:15.491184 3381 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:15.491327 kubelet[3381]: E1108 00:27:15.491240 3381 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7"} Nov 8 00:27:15.491327 kubelet[3381]: E1108 00:27:15.491263 3381 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a34a1c39-70fa-4702-b13c-76dd4159174f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:27:15.491327 kubelet[3381]: E1108 00:27:15.491279 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a34a1c39-70fa-4702-b13c-76dd4159174f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dlmrh" podUID="a34a1c39-70fa-4702-b13c-76dd4159174f" Nov 8 00:27:15.920302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b-shm.mount: Deactivated successfully. 
Nov 8 00:27:15.920494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7-shm.mount: Deactivated successfully. Nov 8 00:27:15.920703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d-shm.mount: Deactivated successfully. Nov 8 00:27:21.362356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134001277.mount: Deactivated successfully. Nov 8 00:27:21.396414 containerd[1841]: time="2025-11-08T00:27:21.396309468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:21.398723 containerd[1841]: time="2025-11-08T00:27:21.398648691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:27:21.400992 containerd[1841]: time="2025-11-08T00:27:21.400936214Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:21.408629 containerd[1841]: time="2025-11-08T00:27:21.408596590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:27:21.409742 containerd[1841]: time="2025-11-08T00:27:21.409212696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.057903075s" Nov 8 00:27:21.409742 containerd[1841]: time="2025-11-08T00:27:21.409252696Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:27:21.421612 containerd[1841]: time="2025-11-08T00:27:21.421575818Z" level=info msg="CreateContainer within sandbox \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:27:21.459871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702676045.mount: Deactivated successfully. Nov 8 00:27:21.468914 containerd[1841]: time="2025-11-08T00:27:21.468877287Z" level=info msg="CreateContainer within sandbox \"cc4bec0cacd2c679b492bc999da4fb94f0f3f00e8f1f26ce768a94e43ace2e20\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1\"" Nov 8 00:27:21.470576 containerd[1841]: time="2025-11-08T00:27:21.470548204Z" level=info msg="StartContainer for \"8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1\"" Nov 8 00:27:21.524704 containerd[1841]: time="2025-11-08T00:27:21.524592140Z" level=info msg="StartContainer for \"8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1\" returns successfully" Nov 8 00:27:21.776366 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:27:21.776491 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:27:21.895328 containerd[1841]: time="2025-11-08T00:27:21.893852102Z" level=info msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.982 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.982 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" iface="eth0" netns="/var/run/netns/cni-48ae79a6-d2dd-d45c-ed86-6d2d475f40b8" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.983 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" iface="eth0" netns="/var/run/netns/cni-48ae79a6-d2dd-d45c-ed86-6d2d475f40b8" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.983 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" iface="eth0" netns="/var/run/netns/cni-48ae79a6-d2dd-d45c-ed86-6d2d475f40b8" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.983 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:21.983 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.021 [INFO][4553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.022 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.022 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.027 [WARNING][4553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.027 [INFO][4553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.029 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:22.034747 containerd[1841]: 2025-11-08 00:27:22.032 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:22.034747 containerd[1841]: time="2025-11-08T00:27:22.034630698Z" level=info msg="TearDown network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" successfully" Nov 8 00:27:22.034747 containerd[1841]: time="2025-11-08T00:27:22.034663498Z" level=info msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" returns successfully" Nov 8 00:27:22.144420 kubelet[3381]: I1108 00:27:22.144381 3381 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqmj\" (UniqueName: \"kubernetes.io/projected/fb5221fc-e33f-46be-85b7-15f7219674b7-kube-api-access-frqmj\") pod \"fb5221fc-e33f-46be-85b7-15f7219674b7\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " Nov 8 00:27:22.144919 kubelet[3381]: I1108 00:27:22.144435 3381 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-backend-key-pair\") pod \"fb5221fc-e33f-46be-85b7-15f7219674b7\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " Nov 8 00:27:22.144919 kubelet[3381]: I1108 00:27:22.144465 3381 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-ca-bundle\") pod \"fb5221fc-e33f-46be-85b7-15f7219674b7\" (UID: \"fb5221fc-e33f-46be-85b7-15f7219674b7\") " Nov 8 00:27:22.145019 kubelet[3381]: I1108 00:27:22.144908 3381 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fb5221fc-e33f-46be-85b7-15f7219674b7" (UID: "fb5221fc-e33f-46be-85b7-15f7219674b7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:27:22.148074 kubelet[3381]: I1108 00:27:22.148040 3381 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5221fc-e33f-46be-85b7-15f7219674b7-kube-api-access-frqmj" (OuterVolumeSpecName: "kube-api-access-frqmj") pod "fb5221fc-e33f-46be-85b7-15f7219674b7" (UID: "fb5221fc-e33f-46be-85b7-15f7219674b7"). InnerVolumeSpecName "kube-api-access-frqmj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:27:22.149030 kubelet[3381]: I1108 00:27:22.148971 3381 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fb5221fc-e33f-46be-85b7-15f7219674b7" (UID: "fb5221fc-e33f-46be-85b7-15f7219674b7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:27:22.245967 kubelet[3381]: I1108 00:27:22.245919 3381 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frqmj\" (UniqueName: \"kubernetes.io/projected/fb5221fc-e33f-46be-85b7-15f7219674b7-kube-api-access-frqmj\") on node \"ci-4081.3.6-n-75d3e74165\" DevicePath \"\"" Nov 8 00:27:22.245967 kubelet[3381]: I1108 00:27:22.245961 3381 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-75d3e74165\" DevicePath \"\"" Nov 8 00:27:22.245967 kubelet[3381]: I1108 00:27:22.245980 3381 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb5221fc-e33f-46be-85b7-15f7219674b7-whisker-ca-bundle\") on node \"ci-4081.3.6-n-75d3e74165\" DevicePath \"\"" Nov 8 00:27:22.363898 systemd[1]: run-netns-cni\x2d48ae79a6\x2dd2dd\x2dd45c\x2ded86\x2d6d2d475f40b8.mount: Deactivated successfully. Nov 8 00:27:22.364103 systemd[1]: var-lib-kubelet-pods-fb5221fc\x2de33f\x2d46be\x2d85b7\x2d15f7219674b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrqmj.mount: Deactivated successfully. Nov 8 00:27:22.364283 systemd[1]: var-lib-kubelet-pods-fb5221fc\x2de33f\x2d46be\x2d85b7\x2d15f7219674b7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:27:22.390891 kubelet[3381]: I1108 00:27:22.388959 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gbjxn" podStartSLOduration=2.431420315 podStartE2EDuration="20.388939111s" podCreationTimestamp="2025-11-08 00:27:02 +0000 UTC" firstStartedPulling="2025-11-08 00:27:03.452421907 +0000 UTC m=+33.968665851" lastFinishedPulling="2025-11-08 00:27:21.409940703 +0000 UTC m=+51.926184647" observedRunningTime="2025-11-08 00:27:22.388618208 +0000 UTC m=+52.904862052" watchObservedRunningTime="2025-11-08 00:27:22.388939111 +0000 UTC m=+52.905183055" Nov 8 00:27:22.549691 kubelet[3381]: I1108 00:27:22.549632 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47c77009-fff2-4caa-920d-906bda818400-whisker-backend-key-pair\") pod \"whisker-795578dc98-zld7l\" (UID: \"47c77009-fff2-4caa-920d-906bda818400\") " pod="calico-system/whisker-795578dc98-zld7l" Nov 8 00:27:22.549691 kubelet[3381]: I1108 00:27:22.549695 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9wh\" (UniqueName: \"kubernetes.io/projected/47c77009-fff2-4caa-920d-906bda818400-kube-api-access-5c9wh\") pod \"whisker-795578dc98-zld7l\" (UID: \"47c77009-fff2-4caa-920d-906bda818400\") " pod="calico-system/whisker-795578dc98-zld7l" Nov 8 00:27:22.549928 kubelet[3381]: I1108 00:27:22.549732 3381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c77009-fff2-4caa-920d-906bda818400-whisker-ca-bundle\") pod \"whisker-795578dc98-zld7l\" (UID: \"47c77009-fff2-4caa-920d-906bda818400\") " pod="calico-system/whisker-795578dc98-zld7l" Nov 8 00:27:22.765245 containerd[1841]: time="2025-11-08T00:27:22.765097842Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-795578dc98-zld7l,Uid:47c77009-fff2-4caa-920d-906bda818400,Namespace:calico-system,Attempt:0,}" Nov 8 00:27:22.902700 systemd-networkd[1393]: calif039b28e661: Link UP Nov 8 00:27:22.902948 systemd-networkd[1393]: calif039b28e661: Gained carrier Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.825 [INFO][4575] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.833 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0 whisker-795578dc98- calico-system 47c77009-fff2-4caa-920d-906bda818400 897 0 2025-11-08 00:27:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:795578dc98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 whisker-795578dc98-zld7l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif039b28e661 [] [] }} ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.833 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.858 [INFO][4587] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" 
HandleID="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.858 [INFO][4587] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" HandleID="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"whisker-795578dc98-zld7l", "timestamp":"2025-11-08 00:27:22.858782871 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.858 [INFO][4587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.859 [INFO][4587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.859 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.864 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.868 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.871 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.872 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.874 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.874 [INFO][4587] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.875 [INFO][4587] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06 Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.882 [INFO][4587] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.889 [INFO][4587] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.109.193/26] block=192.168.109.192/26 handle="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.889 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.193/26] handle="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.889 [INFO][4587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:22.920327 containerd[1841]: 2025-11-08 00:27:22.889 [INFO][4587] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.193/26] IPv6=[] ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" HandleID="k8s-pod-network.1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.891 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0", GenerateName:"whisker-795578dc98-", Namespace:"calico-system", SelfLink:"", UID:"47c77009-fff2-4caa-920d-906bda818400", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795578dc98", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"whisker-795578dc98-zld7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif039b28e661", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.891 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.193/32] ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.891 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif039b28e661 ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.902 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.902 [INFO][4575] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0", GenerateName:"whisker-795578dc98-", Namespace:"calico-system", SelfLink:"", UID:"47c77009-fff2-4caa-920d-906bda818400", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795578dc98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06", Pod:"whisker-795578dc98-zld7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif039b28e661", MAC:"42:80:77:4e:38:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:22.921745 containerd[1841]: 2025-11-08 00:27:22.919 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06" 
Namespace="calico-system" Pod="whisker-795578dc98-zld7l" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--795578dc98--zld7l-eth0" Nov 8 00:27:22.940439 containerd[1841]: time="2025-11-08T00:27:22.940063279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:22.940439 containerd[1841]: time="2025-11-08T00:27:22.940108979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:22.940439 containerd[1841]: time="2025-11-08T00:27:22.940118979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:22.940439 containerd[1841]: time="2025-11-08T00:27:22.940377582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:22.994603 containerd[1841]: time="2025-11-08T00:27:22.994559025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795578dc98-zld7l,Uid:47c77009-fff2-4caa-920d-906bda818400,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bc65fe974efa510bb9caeb3e637f5c6fe4398a0e9b0973825a825c29556af06\"" Nov 8 00:27:22.996826 containerd[1841]: time="2025-11-08T00:27:22.996747747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:27:23.239086 containerd[1841]: time="2025-11-08T00:27:23.238669272Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:23.241566 containerd[1841]: time="2025-11-08T00:27:23.241182797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
Nov 8 00:27:23.241566 containerd[1841]: time="2025-11-08T00:27:23.241307498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:27:23.242832 kubelet[3381]: E1108 00:27:23.241877 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:23.242832 kubelet[3381]: E1108 00:27:23.241947 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:23.243287 kubelet[3381]: E1108 00:27:23.242122 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c9b09b9c8814b659b69c4a1e963dea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:23.246648 containerd[1841]: time="2025-11-08T00:27:23.245782843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
00:27:23.498099 containerd[1841]: time="2025-11-08T00:27:23.497868570Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:23.501149 containerd[1841]: time="2025-11-08T00:27:23.500437996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:27:23.501149 containerd[1841]: time="2025-11-08T00:27:23.500567297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:23.502211 kubelet[3381]: E1108 00:27:23.501919 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:23.502211 kubelet[3381]: E1108 00:27:23.502016 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:23.502742 kubelet[3381]: E1108 00:27:23.502633 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:23.504548 kubelet[3381]: E1108 00:27:23.503845 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:27:23.541609 kernel: bpftool[4761]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:27:23.835646 systemd-networkd[1393]: vxlan.calico: Link UP Nov 8 00:27:23.835656 systemd-networkd[1393]: vxlan.calico: Gained carrier Nov 8 00:27:24.154723 kubelet[3381]: I1108 00:27:24.152781 3381 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5221fc-e33f-46be-85b7-15f7219674b7" path="/var/lib/kubelet/pods/fb5221fc-e33f-46be-85b7-15f7219674b7/volumes" Nov 8 00:27:24.219703 systemd-networkd[1393]: calif039b28e661: Gained IPv6LL Nov 8 00:27:24.383290 kubelet[3381]: E1108 00:27:24.383151 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:27:25.818781 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Nov 8 00:27:27.149414 containerd[1841]: time="2025-11-08T00:27:27.149356170Z" level=info msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" Nov 8 00:27:27.150862 containerd[1841]: time="2025-11-08T00:27:27.150122378Z" level=info msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" Nov 8 00:27:27.152743 containerd[1841]: time="2025-11-08T00:27:27.152704504Z" level=info msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" Nov 8 00:27:27.154657 containerd[1841]: time="2025-11-08T00:27:27.154598223Z" level=info msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" iface="eth0" netns="/var/run/netns/cni-f11f75ef-29e6-a523-7405-d9455698f84a" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" iface="eth0" netns="/var/run/netns/cni-f11f75ef-29e6-a523-7405-d9455698f84a" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" iface="eth0" netns="/var/run/netns/cni-f11f75ef-29e6-a523-7405-d9455698f84a" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.242 [INFO][4871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.297 [INFO][4901] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.297 [INFO][4901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.297 [INFO][4901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.315 [WARNING][4901] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.315 [INFO][4901] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.320 [INFO][4901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.326212 containerd[1841]: 2025-11-08 00:27:27.322 [INFO][4871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:27.337645 containerd[1841]: time="2025-11-08T00:27:27.331730898Z" level=info msg="TearDown network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" successfully" Nov 8 00:27:27.337645 containerd[1841]: time="2025-11-08T00:27:27.332595007Z" level=info msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" returns successfully" Nov 8 00:27:27.339009 systemd[1]: run-netns-cni\x2df11f75ef\x2d29e6\x2da523\x2d7405\x2dd9455698f84a.mount: Deactivated successfully. 
Nov 8 00:27:27.343003 containerd[1841]: time="2025-11-08T00:27:27.342962811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-68mbw,Uid:2bc4797e-b4d5-4e92-8a88-33c63b1aa854,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.293 [INFO][4888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.297 [INFO][4888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" iface="eth0" netns="/var/run/netns/cni-e18aab59-4f7c-77da-86ed-7b82ed78b406" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.298 [INFO][4888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" iface="eth0" netns="/var/run/netns/cni-e18aab59-4f7c-77da-86ed-7b82ed78b406" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.300 [INFO][4888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" iface="eth0" netns="/var/run/netns/cni-e18aab59-4f7c-77da-86ed-7b82ed78b406" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.300 [INFO][4888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.303 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.363 [INFO][4913] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.363 [INFO][4913] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.363 [INFO][4913] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.372 [WARNING][4913] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.372 [INFO][4913] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.374 [INFO][4913] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.383682 containerd[1841]: 2025-11-08 00:27:27.378 [INFO][4888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:27.389003 containerd[1841]: time="2025-11-08T00:27:27.384067723Z" level=info msg="TearDown network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" successfully" Nov 8 00:27:27.389003 containerd[1841]: time="2025-11-08T00:27:27.384100923Z" level=info msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" returns successfully" Nov 8 00:27:27.393989 containerd[1841]: time="2025-11-08T00:27:27.392788410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-49g92,Uid:55c9436a-adbc-4a13-bbea-b53d5615fa79,Namespace:calico-system,Attempt:1,}" Nov 8 00:27:27.397427 systemd[1]: run-netns-cni\x2de18aab59\x2d4f7c\x2d77da\x2d86ed\x2d7b82ed78b406.mount: Deactivated successfully. 
Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.303 [INFO][4877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.303 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" iface="eth0" netns="/var/run/netns/cni-f66d6c76-eb6a-f530-83d3-e0557ba6da28" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.305 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" iface="eth0" netns="/var/run/netns/cni-f66d6c76-eb6a-f530-83d3-e0557ba6da28" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.306 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" iface="eth0" netns="/var/run/netns/cni-f66d6c76-eb6a-f530-83d3-e0557ba6da28" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.306 [INFO][4877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.306 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.377 [INFO][4917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.377 [INFO][4917] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.377 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.391 [WARNING][4917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.391 [INFO][4917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.393 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.406686 containerd[1841]: 2025-11-08 00:27:27.398 [INFO][4877] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:27.411392 containerd[1841]: time="2025-11-08T00:27:27.411144494Z" level=info msg="TearDown network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" successfully" Nov 8 00:27:27.411392 containerd[1841]: time="2025-11-08T00:27:27.411179795Z" level=info msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" returns successfully" Nov 8 00:27:27.414676 containerd[1841]: time="2025-11-08T00:27:27.413247315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlmrh,Uid:a34a1c39-70fa-4702-b13c-76dd4159174f,Namespace:kube-system,Attempt:1,}" Nov 8 00:27:27.413857 systemd[1]: run-netns-cni\x2df66d6c76\x2deb6a\x2df530\x2d83d3\x2de0557ba6da28.mount: Deactivated successfully. Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.300 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.303 [INFO][4876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" iface="eth0" netns="/var/run/netns/cni-f67d53bd-a8d5-0ac6-5c9c-cd24870635d3" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.304 [INFO][4876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" iface="eth0" netns="/var/run/netns/cni-f67d53bd-a8d5-0ac6-5c9c-cd24870635d3" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.305 [INFO][4876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" iface="eth0" netns="/var/run/netns/cni-f67d53bd-a8d5-0ac6-5c9c-cd24870635d3" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.305 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.305 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.378 [INFO][4915] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.378 [INFO][4915] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.398 [INFO][4915] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.424 [WARNING][4915] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.424 [INFO][4915] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.426 [INFO][4915] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.430688 containerd[1841]: 2025-11-08 00:27:27.428 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:27.436105 containerd[1841]: time="2025-11-08T00:27:27.431237396Z" level=info msg="TearDown network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" successfully" Nov 8 00:27:27.436105 containerd[1841]: time="2025-11-08T00:27:27.431271396Z" level=info msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" returns successfully" Nov 8 00:27:27.436105 containerd[1841]: time="2025-11-08T00:27:27.432733211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-hkm9n,Uid:22022de4-7569-4e54-9627-bc50a2dfeb17,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:27:27.626149 systemd-networkd[1393]: cali04c7433d336: Link UP Nov 8 00:27:27.627443 systemd-networkd[1393]: cali04c7433d336: Gained carrier Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.476 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0 calico-apiserver-74c54bbcd- calico-apiserver 2bc4797e-b4d5-4e92-8a88-33c63b1aa854 928 0 2025-11-08 00:26:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c54bbcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 calico-apiserver-74c54bbcd-68mbw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali04c7433d336 [] [] }} ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.476 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.547 [INFO][4955] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" HandleID="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.547 [INFO][4955] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" HandleID="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" 
Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-75d3e74165", "pod":"calico-apiserver-74c54bbcd-68mbw", "timestamp":"2025-11-08 00:27:27.547116557 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.547 [INFO][4955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.547 [INFO][4955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.547 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.561 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.566 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.572 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.576 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.579 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 
00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.580 [INFO][4955] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.582 [INFO][4955] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.590 [INFO][4955] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4955] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.194/26] block=192.168.109.192/26 handle="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.194/26] handle="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:27.656608 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4955] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.194/26] IPv6=[] ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" HandleID="k8s-pod-network.d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.610 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc4797e-b4d5-4e92-8a88-33c63b1aa854", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"calico-apiserver-74c54bbcd-68mbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04c7433d336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.610 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.194/32] ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.611 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04c7433d336 ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.628 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.629 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc4797e-b4d5-4e92-8a88-33c63b1aa854", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f", Pod:"calico-apiserver-74c54bbcd-68mbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04c7433d336", MAC:"4e:cf:bf:9f:61:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.658346 containerd[1841]: 2025-11-08 00:27:27.653 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-68mbw" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:27.728968 containerd[1841]: time="2025-11-08T00:27:27.725161542Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:27.728968 containerd[1841]: time="2025-11-08T00:27:27.728770278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:27.728968 containerd[1841]: time="2025-11-08T00:27:27.728804178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:27.728968 containerd[1841]: time="2025-11-08T00:27:27.728904379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:27.738621 systemd-networkd[1393]: cali7411957b3cb: Link UP Nov 8 00:27:27.739772 systemd-networkd[1393]: cali7411957b3cb: Gained carrier Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.525 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0 goldmane-666569f655- calico-system 55c9436a-adbc-4a13-bbea-b53d5615fa79 929 0 2025-11-08 00:27:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 goldmane-666569f655-49g92 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7411957b3cb [] [] }} ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.528 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.602 [INFO][4980] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" HandleID="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4980] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" HandleID="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fdf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"goldmane-666569f655-49g92", "timestamp":"2025-11-08 00:27:27.602485212 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.603 [INFO][4980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.604 [INFO][4980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.604 [INFO][4980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.662 [INFO][4980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.673 [INFO][4980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.685 [INFO][4980] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.689 [INFO][4980] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.691 [INFO][4980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.691 [INFO][4980] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.693 [INFO][4980] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1 Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.703 [INFO][4980] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.717 [INFO][4980] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.109.195/26] block=192.168.109.192/26 handle="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.717 [INFO][4980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.195/26] handle="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.717 [INFO][4980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.782460 containerd[1841]: 2025-11-08 00:27:27.717 [INFO][4980] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.195/26] IPv6=[] ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" HandleID="k8s-pod-network.b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.720 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55c9436a-adbc-4a13-bbea-b53d5615fa79", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"goldmane-666569f655-49g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7411957b3cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.720 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.195/32] ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.720 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7411957b3cb ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.747 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.753 [INFO][4945] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55c9436a-adbc-4a13-bbea-b53d5615fa79", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1", Pod:"goldmane-666569f655-49g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7411957b3cb", MAC:"0e:4f:16:20:37:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.785085 containerd[1841]: 2025-11-08 00:27:27.778 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1" Namespace="calico-system" Pod="goldmane-666569f655-49g92" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:27.863523 systemd-networkd[1393]: cali6689d4e7cb0: Link UP Nov 8 00:27:27.865394 systemd-networkd[1393]: cali6689d4e7cb0: Gained carrier Nov 8 00:27:27.865681 containerd[1841]: time="2025-11-08T00:27:27.865219046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:27.865681 containerd[1841]: time="2025-11-08T00:27:27.865284646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:27.865681 containerd[1841]: time="2025-11-08T00:27:27.865309946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:27.869547 containerd[1841]: time="2025-11-08T00:27:27.869452388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.590 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0 calico-apiserver-74c54bbcd- calico-apiserver 22022de4-7569-4e54-9627-bc50a2dfeb17 930 0 2025-11-08 00:26:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74c54bbcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 calico-apiserver-74c54bbcd-hkm9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6689d4e7cb0 [] [] }} ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.591 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.677 [INFO][4993] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" HandleID="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.677 [INFO][4993] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" HandleID="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-75d3e74165", "pod":"calico-apiserver-74c54bbcd-hkm9n", "timestamp":"2025-11-08 00:27:27.677480064 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.677 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.721 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.722 [INFO][4993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.766 [INFO][4993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.780 [INFO][4993] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.791 [INFO][4993] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.793 [INFO][4993] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.797 [INFO][4993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.797 [INFO][4993] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.798 [INFO][4993] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.810 [INFO][4993] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.829 [INFO][4993] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.109.196/26] block=192.168.109.192/26 handle="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.829 [INFO][4993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.196/26] handle="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.829 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:27.906790 containerd[1841]: 2025-11-08 00:27:27.829 [INFO][4993] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.196/26] IPv6=[] ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" HandleID="k8s-pod-network.edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.844 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"22022de4-7569-4e54-9627-bc50a2dfeb17", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"calico-apiserver-74c54bbcd-hkm9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6689d4e7cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.845 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.196/32] ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.845 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6689d4e7cb0 ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.869 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" 
WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.871 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"22022de4-7569-4e54-9627-bc50a2dfeb17", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f", Pod:"calico-apiserver-74c54bbcd-hkm9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6689d4e7cb0", MAC:"52:60:9b:b0:74:4b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:27.908182 containerd[1841]: 2025-11-08 00:27:27.900 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f" Namespace="calico-apiserver" Pod="calico-apiserver-74c54bbcd-hkm9n" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:27.929467 containerd[1841]: time="2025-11-08T00:27:27.928023575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-68mbw,Uid:2bc4797e-b4d5-4e92-8a88-33c63b1aa854,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f\"" Nov 8 00:27:27.937252 containerd[1841]: time="2025-11-08T00:27:27.936968865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:27.982196 systemd-networkd[1393]: cali94cedcaa158: Link UP Nov 8 00:27:27.982452 systemd-networkd[1393]: cali94cedcaa158: Gained carrier Nov 8 00:27:28.009743 containerd[1841]: time="2025-11-08T00:27:28.007370670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:28.010591 containerd[1841]: time="2025-11-08T00:27:28.008277979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:28.010591 containerd[1841]: time="2025-11-08T00:27:28.009897496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.013799 containerd[1841]: time="2025-11-08T00:27:28.012229419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.650 [INFO][4961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0 coredns-668d6bf9bc- kube-system a34a1c39-70fa-4702-b13c-76dd4159174f 931 0 2025-11-08 00:26:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 coredns-668d6bf9bc-dlmrh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali94cedcaa158 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.654 [INFO][4961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.771 [INFO][5009] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" HandleID="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.771 [INFO][5009] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" 
HandleID="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e620), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"coredns-668d6bf9bc-dlmrh", "timestamp":"2025-11-08 00:27:27.771244704 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.771 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.832 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.832 [INFO][5009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.875 [INFO][5009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.904 [INFO][5009] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.913 [INFO][5009] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.916 [INFO][5009] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.919 [INFO][5009] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.919 [INFO][5009] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.923 [INFO][5009] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30 Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.949 [INFO][5009] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.962 [INFO][5009] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.197/26] block=192.168.109.192/26 handle="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.962 [INFO][5009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.197/26] handle="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.962 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:28.019946 containerd[1841]: 2025-11-08 00:27:27.962 [INFO][5009] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.197/26] IPv6=[] ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" HandleID="k8s-pod-network.f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.020838 containerd[1841]: 2025-11-08 00:27:27.970 [INFO][4961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a34a1c39-70fa-4702-b13c-76dd4159174f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"coredns-668d6bf9bc-dlmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali94cedcaa158", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:28.020838 containerd[1841]: 2025-11-08 00:27:27.971 [INFO][4961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.197/32] ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.020838 containerd[1841]: 2025-11-08 00:27:27.971 [INFO][4961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94cedcaa158 ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.020838 containerd[1841]: 2025-11-08 00:27:27.983 [INFO][4961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.020838 containerd[1841]: 2025-11-08 00:27:27.984 [INFO][4961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" 
WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a34a1c39-70fa-4702-b13c-76dd4159174f", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30", Pod:"coredns-668d6bf9bc-dlmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94cedcaa158", MAC:"62:40:c4:a0:bf:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:28.020838 
containerd[1841]: 2025-11-08 00:27:28.009 [INFO][4961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30" Namespace="kube-system" Pod="coredns-668d6bf9bc-dlmrh" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:28.028293 containerd[1841]: time="2025-11-08T00:27:28.028247880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-49g92,Uid:55c9436a-adbc-4a13-bbea-b53d5615fa79,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1\"" Nov 8 00:27:28.056374 containerd[1841]: time="2025-11-08T00:27:28.056130559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:28.056374 containerd[1841]: time="2025-11-08T00:27:28.056183060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:28.056374 containerd[1841]: time="2025-11-08T00:27:28.056197860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.056374 containerd[1841]: time="2025-11-08T00:27:28.056282361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.152615 containerd[1841]: time="2025-11-08T00:27:28.151790018Z" level=info msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" Nov 8 00:27:28.159188 containerd[1841]: time="2025-11-08T00:27:28.159026990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74c54bbcd-hkm9n,Uid:22022de4-7569-4e54-9627-bc50a2dfeb17,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f\"" Nov 8 00:27:28.163734 containerd[1841]: time="2025-11-08T00:27:28.163452335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dlmrh,Uid:a34a1c39-70fa-4702-b13c-76dd4159174f,Namespace:kube-system,Attempt:1,} returns sandbox id \"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30\"" Nov 8 00:27:28.173381 containerd[1841]: time="2025-11-08T00:27:28.173127832Z" level=info msg="CreateContainer within sandbox \"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:27:28.195308 containerd[1841]: time="2025-11-08T00:27:28.192083422Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:28.207839 containerd[1841]: time="2025-11-08T00:27:28.207580277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:28.208301 containerd[1841]: time="2025-11-08T00:27:28.207648378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:28.210671 kubelet[3381]: E1108 00:27:28.210629 3381 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:28.211071 kubelet[3381]: E1108 00:27:28.210676 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:28.211071 kubelet[3381]: E1108 00:27:28.210933 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:28.212962 kubelet[3381]: E1108 00:27:28.212060 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:28.213060 containerd[1841]: time="2025-11-08T00:27:28.211849120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:27:28.223479 containerd[1841]: time="2025-11-08T00:27:28.223059332Z" level=info msg="CreateContainer within sandbox \"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1edde8d69115fff6650bb2078e051d4128b82d40db69cef5f50f385ce85cb482\"" Nov 8 00:27:28.224758 containerd[1841]: time="2025-11-08T00:27:28.224100943Z" level=info msg="StartContainer for \"1edde8d69115fff6650bb2078e051d4128b82d40db69cef5f50f385ce85cb482\"" Nov 8 00:27:28.366393 containerd[1841]: time="2025-11-08T00:27:28.364331248Z" level=info msg="StartContainer for \"1edde8d69115fff6650bb2078e051d4128b82d40db69cef5f50f385ce85cb482\" returns successfully" Nov 8 00:27:28.365252 systemd[1]: run-netns-cni\x2df67d53bd\x2da8d5\x2d0ac6\x2d5c9c\x2dcd24870635d3.mount: Deactivated successfully. 
Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.272 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.272 [INFO][5222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" iface="eth0" netns="/var/run/netns/cni-82d1f458-2225-72e7-e519-79e838c35f07" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.275 [INFO][5222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" iface="eth0" netns="/var/run/netns/cni-82d1f458-2225-72e7-e519-79e838c35f07" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.275 [INFO][5222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" iface="eth0" netns="/var/run/netns/cni-82d1f458-2225-72e7-e519-79e838c35f07" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.275 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.275 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.359 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.362 [INFO][5244] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.362 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.379 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.379 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.381 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:28.387638 containerd[1841]: 2025-11-08 00:27:28.384 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:28.390584 containerd[1841]: time="2025-11-08T00:27:28.388452290Z" level=info msg="TearDown network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" successfully" Nov 8 00:27:28.390584 containerd[1841]: time="2025-11-08T00:27:28.388493491Z" level=info msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" returns successfully" Nov 8 00:27:28.390584 containerd[1841]: time="2025-11-08T00:27:28.389815104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nvxf,Uid:dc33bba6-34c3-4255-9ae9-b94ef472a17a,Namespace:kube-system,Attempt:1,}" Nov 8 00:27:28.400256 systemd[1]: run-netns-cni\x2d82d1f458\x2d2225\x2d72e7\x2de519\x2d79e838c35f07.mount: Deactivated successfully. Nov 8 00:27:28.422955 kubelet[3381]: E1108 00:27:28.422915 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:28.487729 kubelet[3381]: I1108 00:27:28.487587 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dlmrh" podStartSLOduration=54.487566484 podStartE2EDuration="54.487566484s" podCreationTimestamp="2025-11-08 00:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:27:28.456210069 +0000 UTC m=+58.972454013" 
watchObservedRunningTime="2025-11-08 00:27:28.487566484 +0000 UTC m=+59.003810328" Nov 8 00:27:28.495563 containerd[1841]: time="2025-11-08T00:27:28.494383252Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:28.498509 containerd[1841]: time="2025-11-08T00:27:28.498269191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:27:28.498635 containerd[1841]: time="2025-11-08T00:27:28.498446793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:28.498937 kubelet[3381]: E1108 00:27:28.498898 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:28.499029 kubelet[3381]: E1108 00:27:28.498946 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:28.499270 kubelet[3381]: E1108 00:27:28.499221 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:28.499926 containerd[1841]: time="2025-11-08T00:27:28.499893507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:28.500833 kubelet[3381]: E1108 00:27:28.500802 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:28.632617 systemd-networkd[1393]: cali6c57eb55ad0: Link UP Nov 8 00:27:28.635621 systemd-networkd[1393]: 
cali6c57eb55ad0: Gained carrier Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.542 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0 coredns-668d6bf9bc- kube-system dc33bba6-34c3-4255-9ae9-b94ef472a17a 954 0 2025-11-08 00:26:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 coredns-668d6bf9bc-5nvxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c57eb55ad0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.542 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.588 [INFO][5285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" HandleID="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.588 [INFO][5285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" 
HandleID="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"coredns-668d6bf9bc-5nvxf", "timestamp":"2025-11-08 00:27:28.588409994 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.588 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.588 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.588 [INFO][5285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.595 [INFO][5285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.598 [INFO][5285] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.602 [INFO][5285] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.603 [INFO][5285] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.605 [INFO][5285] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.605 [INFO][5285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.606 [INFO][5285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.612 [INFO][5285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.621 [INFO][5285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.198/26] block=192.168.109.192/26 handle="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.621 [INFO][5285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.198/26] handle="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.621 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:27:28.652650 containerd[1841]: 2025-11-08 00:27:28.621 [INFO][5285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.198/26] IPv6=[] ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" HandleID="k8s-pod-network.c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.653247 containerd[1841]: 2025-11-08 00:27:28.623 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc33bba6-34c3-4255-9ae9-b94ef472a17a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"coredns-668d6bf9bc-5nvxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6c57eb55ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:28.653247 containerd[1841]: 2025-11-08 00:27:28.624 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.198/32] ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.653247 containerd[1841]: 2025-11-08 00:27:28.624 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c57eb55ad0 ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.653247 containerd[1841]: 2025-11-08 00:27:28.626 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.653247 containerd[1841]: 2025-11-08 00:27:28.626 [INFO][5272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" 
WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc33bba6-34c3-4255-9ae9-b94ef472a17a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b", Pod:"coredns-668d6bf9bc-5nvxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c57eb55ad0", MAC:"de:5b:52:37:86:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:28.653247 
containerd[1841]: 2025-11-08 00:27:28.644 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nvxf" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:28.679450 containerd[1841]: time="2025-11-08T00:27:28.679223505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:28.679450 containerd[1841]: time="2025-11-08T00:27:28.679297205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:28.679450 containerd[1841]: time="2025-11-08T00:27:28.679326006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.679450 containerd[1841]: time="2025-11-08T00:27:28.679421007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:28.698760 systemd-networkd[1393]: cali04c7433d336: Gained IPv6LL Nov 8 00:27:28.748178 containerd[1841]: time="2025-11-08T00:27:28.744524259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nvxf,Uid:dc33bba6-34c3-4255-9ae9-b94ef472a17a,Namespace:kube-system,Attempt:1,} returns sandbox id \"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b\"" Nov 8 00:27:28.748437 containerd[1841]: time="2025-11-08T00:27:28.748405198Z" level=info msg="CreateContainer within sandbox \"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:27:28.748839 containerd[1841]: time="2025-11-08T00:27:28.748811702Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:28.751913 containerd[1841]: time="2025-11-08T00:27:28.751848833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:28.752345 containerd[1841]: time="2025-11-08T00:27:28.751918633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:28.752399 kubelet[3381]: E1108 00:27:28.752022 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:28.752399 kubelet[3381]: E1108 00:27:28.752064 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:28.752399 kubelet[3381]: E1108 00:27:28.752197 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgfx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:28.753458 kubelet[3381]: E1108 00:27:28.753402 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:28.781322 containerd[1841]: time="2025-11-08T00:27:28.781184527Z" level=info msg="CreateContainer within sandbox \"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68654184f38302d9e67ab279167522da9fd05f2547760acb342493a0b57ebe29\"" Nov 8 00:27:28.783623 containerd[1841]: time="2025-11-08T00:27:28.783572551Z" level=info msg="StartContainer for \"68654184f38302d9e67ab279167522da9fd05f2547760acb342493a0b57ebe29\"" Nov 8 00:27:28.865560 containerd[1841]: time="2025-11-08T00:27:28.865012867Z" level=info msg="StartContainer for \"68654184f38302d9e67ab279167522da9fd05f2547760acb342493a0b57ebe29\" returns successfully" Nov 8 00:27:29.083069 systemd-networkd[1393]: cali6689d4e7cb0: Gained IPv6LL Nov 8 00:27:29.429628 kubelet[3381]: E1108 00:27:29.429459 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:29.430996 kubelet[3381]: E1108 00:27:29.430371 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:29.431103 kubelet[3381]: E1108 00:27:29.431029 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:29.475970 kubelet[3381]: I1108 00:27:29.475716 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5nvxf" podStartSLOduration=55.475694388 podStartE2EDuration="55.475694388s" podCreationTimestamp="2025-11-08 00:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:27:29.471662948 +0000 UTC m=+59.987906892" watchObservedRunningTime="2025-11-08 00:27:29.475694388 +0000 UTC m=+59.991938232" Nov 8 00:27:29.658834 systemd-networkd[1393]: cali94cedcaa158: Gained IPv6LL Nov 8 00:27:29.787259 systemd-networkd[1393]: cali7411957b3cb: Gained IPv6LL Nov 8 00:27:30.128617 containerd[1841]: time="2025-11-08T00:27:30.128460531Z" level=info msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" Nov 8 00:27:30.170963 containerd[1841]: time="2025-11-08T00:27:30.170873256Z" level=info msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" Nov 8 00:27:30.174661 containerd[1841]: time="2025-11-08T00:27:30.172281870Z" level=info msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.216 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55c9436a-adbc-4a13-bbea-b53d5615fa79", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1", Pod:"goldmane-666569f655-49g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7411957b3cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.218 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.220 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" iface="eth0" netns="" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.220 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.220 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.364 [INFO][5434] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.364 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.365 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.379 [WARNING][5434] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.379 [INFO][5434] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.382 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:30.391122 containerd[1841]: 2025-11-08 00:27:30.389 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.391122 containerd[1841]: time="2025-11-08T00:27:30.390999762Z" level=info msg="TearDown network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" successfully" Nov 8 00:27:30.397963 containerd[1841]: time="2025-11-08T00:27:30.391029463Z" level=info msg="StopPodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" returns successfully" Nov 8 00:27:30.399966 containerd[1841]: time="2025-11-08T00:27:30.399145544Z" level=info msg="RemovePodSandbox for \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" Nov 8 00:27:30.399966 containerd[1841]: time="2025-11-08T00:27:30.399193045Z" level=info msg="Forcibly stopping sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\"" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.362 [INFO][5426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:30.460616 
containerd[1841]: 2025-11-08 00:27:30.363 [INFO][5426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" iface="eth0" netns="/var/run/netns/cni-ae32e479-f20b-55ad-b0f5-0e2924496e00" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.364 [INFO][5426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" iface="eth0" netns="/var/run/netns/cni-ae32e479-f20b-55ad-b0f5-0e2924496e00" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.367 [INFO][5426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" iface="eth0" netns="/var/run/netns/cni-ae32e479-f20b-55ad-b0f5-0e2924496e00" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.367 [INFO][5426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.367 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.428 [INFO][5446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.429 [INFO][5446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.429 [INFO][5446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.448 [WARNING][5446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.448 [INFO][5446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.450 [INFO][5446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:30.460616 containerd[1841]: 2025-11-08 00:27:30.452 [INFO][5426] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:27:30.467287 containerd[1841]: time="2025-11-08T00:27:30.467232827Z" level=info msg="TearDown network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" successfully" Nov 8 00:27:30.468164 containerd[1841]: time="2025-11-08T00:27:30.468133336Z" level=info msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" returns successfully" Nov 8 00:27:30.476574 containerd[1841]: time="2025-11-08T00:27:30.474651301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c9ff5fff-tvhdb,Uid:c14ad17b-6d81-4bc3-936c-20b5e88e9ac4,Namespace:calico-system,Attempt:1,}" Nov 8 00:27:30.476466 systemd[1]: run-netns-cni\x2dae32e479\x2df20b\x2d55ad\x2db0f5\x2d0e2924496e00.mount: Deactivated successfully. 
Nov 8 00:27:30.492054 systemd-networkd[1393]: cali6c57eb55ad0: Gained IPv6LL Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.402 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.403 [INFO][5425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" iface="eth0" netns="/var/run/netns/cni-b232efe5-57b3-6a0f-8132-53c090d0fc5f" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.403 [INFO][5425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" iface="eth0" netns="/var/run/netns/cni-b232efe5-57b3-6a0f-8132-53c090d0fc5f" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.404 [INFO][5425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" iface="eth0" netns="/var/run/netns/cni-b232efe5-57b3-6a0f-8132-53c090d0fc5f" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.404 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.404 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.524 [INFO][5453] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.524 [INFO][5453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.524 [INFO][5453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.541 [WARNING][5453] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.541 [INFO][5453] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.544 [INFO][5453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:30.558298 containerd[1841]: 2025-11-08 00:27:30.553 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:27:30.564033 containerd[1841]: time="2025-11-08T00:27:30.563936696Z" level=info msg="TearDown network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" successfully" Nov 8 00:27:30.564033 containerd[1841]: time="2025-11-08T00:27:30.563978296Z" level=info msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" returns successfully" Nov 8 00:27:30.566424 systemd[1]: run-netns-cni\x2db232efe5\x2d57b3\x2d6a0f\x2d8132\x2d53c090d0fc5f.mount: Deactivated successfully. Nov 8 00:27:30.570893 containerd[1841]: time="2025-11-08T00:27:30.570673363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wt8ss,Uid:a8e41043-2a66-4025-a79c-fc0f732e85fb,Namespace:calico-system,Attempt:1,}" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.595 [WARNING][5466] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55c9436a-adbc-4a13-bbea-b53d5615fa79", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"b0a16fc8eed6990915cc457fe95609a2dc704f731db4bc1d00aa0810197e0fc1", Pod:"goldmane-666569f655-49g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7411957b3cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.595 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.595 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" iface="eth0" netns="" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.595 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.595 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.664 [INFO][5488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.664 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.664 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.677 [WARNING][5488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.677 [INFO][5488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" HandleID="k8s-pod-network.717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Workload="ci--4081.3.6--n--75d3e74165-k8s-goldmane--666569f655--49g92-eth0" Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.679 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:30.684651 containerd[1841]: 2025-11-08 00:27:30.682 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c" Nov 8 00:27:30.684651 containerd[1841]: time="2025-11-08T00:27:30.684614905Z" level=info msg="TearDown network for sandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" successfully" Nov 8 00:27:30.698790 containerd[1841]: time="2025-11-08T00:27:30.698729547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:30.699678 containerd[1841]: time="2025-11-08T00:27:30.698807648Z" level=info msg="RemovePodSandbox \"717a5502e399c03f549aef7eeee2a01ad06b472fc27ab6ebff276846e9c6cc9c\" returns successfully" Nov 8 00:27:30.700132 containerd[1841]: time="2025-11-08T00:27:30.700105461Z" level=info msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" Nov 8 00:27:30.802317 systemd-networkd[1393]: calif187b22f853: Link UP Nov 8 00:27:30.812134 systemd-networkd[1393]: calif187b22f853: Gained carrier Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.653 [INFO][5477] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0 calico-kube-controllers-76c9ff5fff- calico-system c14ad17b-6d81-4bc3-936c-20b5e88e9ac4 1013 0 2025-11-08 00:27:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76c9ff5fff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 calico-kube-controllers-76c9ff5fff-tvhdb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif187b22f853 [] [] }} ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.653 [INFO][5477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" 
WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.715 [INFO][5508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" HandleID="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.715 [INFO][5508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" HandleID="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"calico-kube-controllers-76c9ff5fff-tvhdb", "timestamp":"2025-11-08 00:27:30.715070011 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.715 [INFO][5508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.715 [INFO][5508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.715 [INFO][5508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.728 [INFO][5508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.734 [INFO][5508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.744 [INFO][5508] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.747 [INFO][5508] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.752 [INFO][5508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.752 [INFO][5508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.760 [INFO][5508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.765 [INFO][5508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.791 [INFO][5508] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.109.199/26] block=192.168.109.192/26 handle="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.792 [INFO][5508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.199/26] handle="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.792 [INFO][5508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:30.863598 containerd[1841]: 2025-11-08 00:27:30.792 [INFO][5508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.199/26] IPv6=[] ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" HandleID="k8s-pod-network.5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.796 [INFO][5477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0", GenerateName:"calico-kube-controllers-76c9ff5fff-", Namespace:"calico-system", SelfLink:"", UID:"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c9ff5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"calico-kube-controllers-76c9ff5fff-tvhdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif187b22f853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.797 [INFO][5477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.199/32] ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.797 [INFO][5477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif187b22f853 ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.803 [INFO][5477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.808 [INFO][5477] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0", GenerateName:"calico-kube-controllers-76c9ff5fff-", Namespace:"calico-system", SelfLink:"", UID:"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c9ff5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc", Pod:"calico-kube-controllers-76c9ff5fff-tvhdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif187b22f853", MAC:"12:ec:cc:79:d8:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:30.864468 containerd[1841]: 2025-11-08 00:27:30.848 [INFO][5477] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc" Namespace="calico-system" Pod="calico-kube-controllers-76c9ff5fff-tvhdb" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:27:30.989086 systemd-networkd[1393]: cali4d07fc172d6: Link UP Nov 8 00:27:30.995178 systemd-networkd[1393]: cali4d07fc172d6: Gained carrier Nov 8 00:27:30.997040 containerd[1841]: time="2025-11-08T00:27:30.996740209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:30.999320 containerd[1841]: time="2025-11-08T00:27:30.998751128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:30.999592 containerd[1841]: time="2025-11-08T00:27:30.999467635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:31.000182 containerd[1841]: time="2025-11-08T00:27:31.000126741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.841 [WARNING][5523] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.847 [INFO][5523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.847 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" iface="eth0" netns="" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.847 [INFO][5523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.847 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.982 [INFO][5540] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.983 [INFO][5540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:30.983 [INFO][5540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:31.007 [WARNING][5540] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:31.008 [INFO][5540] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:31.027 [INFO][5540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.038580 containerd[1841]: 2025-11-08 00:27:31.034 [INFO][5523] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.039464 containerd[1841]: time="2025-11-08T00:27:31.038710511Z" level=info msg="TearDown network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" successfully" Nov 8 00:27:31.039464 containerd[1841]: time="2025-11-08T00:27:31.039315717Z" level=info msg="StopPodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" returns successfully" Nov 8 00:27:31.040926 containerd[1841]: time="2025-11-08T00:27:31.040893632Z" level=info msg="RemovePodSandbox for \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" Nov 8 00:27:31.041015 containerd[1841]: time="2025-11-08T00:27:31.040941133Z" level=info msg="Forcibly stopping sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\"" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.714 [INFO][5494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0 csi-node-driver- calico-system a8e41043-2a66-4025-a79c-fc0f732e85fb 1014 0 2025-11-08 00:27:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-75d3e74165 csi-node-driver-wt8ss eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4d07fc172d6 [] [] }} ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.714 [INFO][5494] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.807 [INFO][5530] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" HandleID="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.811 [INFO][5530] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" HandleID="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003159e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-75d3e74165", "pod":"csi-node-driver-wt8ss", "timestamp":"2025-11-08 00:27:30.80782594 +0000 UTC"}, Hostname:"ci-4081.3.6-n-75d3e74165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.811 [INFO][5530] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.811 [INFO][5530] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.811 [INFO][5530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-75d3e74165' Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.837 [INFO][5530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.861 [INFO][5530] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.874 [INFO][5530] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.877 [INFO][5530] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.885 [INFO][5530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.887 [INFO][5530] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.905 [INFO][5530] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94 Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.936 [INFO][5530] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.960 [INFO][5530] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.109.200/26] block=192.168.109.192/26 handle="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.961 [INFO][5530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.200/26] handle="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" host="ci-4081.3.6-n-75d3e74165" Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.961 [INFO][5530] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.065774 containerd[1841]: 2025-11-08 00:27:30.961 [INFO][5530] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.200/26] IPv6=[] ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" HandleID="k8s-pod-network.28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.066696 containerd[1841]: 2025-11-08 00:27:30.973 [INFO][5494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e41043-2a66-4025-a79c-fc0f732e85fb", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"", Pod:"csi-node-driver-wt8ss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d07fc172d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.066696 containerd[1841]: 2025-11-08 00:27:30.976 [INFO][5494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.200/32] ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.066696 containerd[1841]: 2025-11-08 00:27:30.977 [INFO][5494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d07fc172d6 ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.066696 containerd[1841]: 2025-11-08 00:27:31.003 [INFO][5494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.066696 
containerd[1841]: 2025-11-08 00:27:31.013 [INFO][5494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e41043-2a66-4025-a79c-fc0f732e85fb", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94", Pod:"csi-node-driver-wt8ss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d07fc172d6", MAC:"32:05:23:51:0e:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.066696 containerd[1841]: 
2025-11-08 00:27:31.051 [INFO][5494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94" Namespace="calico-system" Pod="csi-node-driver-wt8ss" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:27:31.166957 containerd[1841]: time="2025-11-08T00:27:31.166736439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:27:31.166957 containerd[1841]: time="2025-11-08T00:27:31.166842440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:27:31.166957 containerd[1841]: time="2025-11-08T00:27:31.166861340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:31.167901 containerd[1841]: time="2025-11-08T00:27:31.167605148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:27:31.182108 containerd[1841]: time="2025-11-08T00:27:31.181515581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c9ff5fff-tvhdb,Uid:c14ad17b-6d81-4bc3-936c-20b5e88e9ac4,Namespace:calico-system,Attempt:1,} returns sandbox id \"5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc\"" Nov 8 00:27:31.184578 containerd[1841]: time="2025-11-08T00:27:31.184550610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.177 [WARNING][5595] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" WorkloadEndpoint="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.177 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.177 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" iface="eth0" netns="" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.177 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.178 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.218 [INFO][5639] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.218 [INFO][5639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.218 [INFO][5639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.229 [WARNING][5639] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.229 [INFO][5639] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" HandleID="k8s-pod-network.866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Workload="ci--4081.3.6--n--75d3e74165-k8s-whisker--68bf965c6--fvbxd-eth0" Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.233 [INFO][5639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.239491 containerd[1841]: 2025-11-08 00:27:31.237 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd" Nov 8 00:27:31.242603 containerd[1841]: time="2025-11-08T00:27:31.239647238Z" level=info msg="TearDown network for sandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" successfully" Nov 8 00:27:31.253754 containerd[1841]: time="2025-11-08T00:27:31.253720873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wt8ss,Uid:a8e41043-2a66-4025-a79c-fc0f732e85fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94\"" Nov 8 00:27:31.258700 containerd[1841]: time="2025-11-08T00:27:31.258665821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:31.259014 containerd[1841]: time="2025-11-08T00:27:31.258937423Z" level=info msg="RemovePodSandbox \"866aaec746509cb1f623bcff242fb756ddb3037c84378443d6c6e01bbd3011fd\" returns successfully" Nov 8 00:27:31.259675 containerd[1841]: time="2025-11-08T00:27:31.259338027Z" level=info msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.296 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc33bba6-34c3-4255-9ae9-b94ef472a17a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b", Pod:"coredns-668d6bf9bc-5nvxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c57eb55ad0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.296 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.296 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" iface="eth0" netns="" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.296 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.296 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.317 [INFO][5680] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.318 [INFO][5680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.318 [INFO][5680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.331 [WARNING][5680] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.331 [INFO][5680] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.333 [INFO][5680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.335594 containerd[1841]: 2025-11-08 00:27:31.334 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.336360 containerd[1841]: time="2025-11-08T00:27:31.335643459Z" level=info msg="TearDown network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" successfully" Nov 8 00:27:31.336360 containerd[1841]: time="2025-11-08T00:27:31.335673659Z" level=info msg="StopPodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" returns successfully" Nov 8 00:27:31.336754 containerd[1841]: time="2025-11-08T00:27:31.336717769Z" level=info msg="RemovePodSandbox for \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" Nov 8 00:27:31.336754 containerd[1841]: time="2025-11-08T00:27:31.336751770Z" level=info msg="Forcibly stopping sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\"" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.370 [WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc33bba6-34c3-4255-9ae9-b94ef472a17a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"c33af81c2360659432e01f8ffd2f73e494e66dd489ec649df537f21037c2471b", Pod:"coredns-668d6bf9bc-5nvxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c57eb55ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 
00:27:31.371 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.371 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" iface="eth0" netns="" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.371 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.371 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.393 [INFO][5701] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.393 [INFO][5701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.393 [INFO][5701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.399 [WARNING][5701] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.399 [INFO][5701] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" HandleID="k8s-pod-network.3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--5nvxf-eth0" Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.400 [INFO][5701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.402685 containerd[1841]: 2025-11-08 00:27:31.401 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3" Nov 8 00:27:31.403560 containerd[1841]: time="2025-11-08T00:27:31.402747003Z" level=info msg="TearDown network for sandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" successfully" Nov 8 00:27:31.410361 containerd[1841]: time="2025-11-08T00:27:31.410319375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:31.410551 containerd[1841]: time="2025-11-08T00:27:31.410393676Z" level=info msg="RemovePodSandbox \"3fffc8504651f3d6667bcb4931e24fe89656c39d8ecf65b309070007ebd680c3\" returns successfully" Nov 8 00:27:31.411285 containerd[1841]: time="2025-11-08T00:27:31.411016882Z" level=info msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" Nov 8 00:27:31.436036 containerd[1841]: time="2025-11-08T00:27:31.435467316Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:31.439417 containerd[1841]: time="2025-11-08T00:27:31.439361854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:31.439703 containerd[1841]: time="2025-11-08T00:27:31.439664957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:31.439961 kubelet[3381]: E1108 00:27:31.439924 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:31.440370 kubelet[3381]: E1108 00:27:31.439966 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:31.440432 containerd[1841]: time="2025-11-08T00:27:31.440361563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:31.440800 kubelet[3381]: E1108 00:27:31.440751 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4kx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:31.442062 kubelet[3381]: E1108 00:27:31.441941 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:31.466584 kubelet[3381]: E1108 00:27:31.464479 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.484 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"22022de4-7569-4e54-9627-bc50a2dfeb17", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f", Pod:"calico-apiserver-74c54bbcd-hkm9n", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6689d4e7cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.485 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.485 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" iface="eth0" netns="" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.485 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.485 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.509 [INFO][5722] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.509 [INFO][5722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.509 [INFO][5722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.515 [WARNING][5722] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.515 [INFO][5722] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.516 [INFO][5722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.519573 containerd[1841]: 2025-11-08 00:27:31.517 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.523675 containerd[1841]: time="2025-11-08T00:27:31.519661824Z" level=info msg="TearDown network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" successfully" Nov 8 00:27:31.523675 containerd[1841]: time="2025-11-08T00:27:31.520575933Z" level=info msg="StopPodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" returns successfully" Nov 8 00:27:31.525161 containerd[1841]: time="2025-11-08T00:27:31.524166867Z" level=info msg="RemovePodSandbox for \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" Nov 8 00:27:31.525161 containerd[1841]: time="2025-11-08T00:27:31.524219868Z" level=info msg="Forcibly stopping sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\"" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.571 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"22022de4-7569-4e54-9627-bc50a2dfeb17", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"edb35457dcfccd2004937c6fe3866bd9fa4f477ebb735d6711a7a77f984b6a5f", Pod:"calico-apiserver-74c54bbcd-hkm9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6689d4e7cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.571 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.571 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" iface="eth0" netns="" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.572 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.572 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.594 [INFO][5743] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.594 [INFO][5743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.594 [INFO][5743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.599 [WARNING][5743] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.599 [INFO][5743] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" HandleID="k8s-pod-network.c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--hkm9n-eth0" Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.600 [INFO][5743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.603297 containerd[1841]: 2025-11-08 00:27:31.602 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4" Nov 8 00:27:31.603971 containerd[1841]: time="2025-11-08T00:27:31.603332326Z" level=info msg="TearDown network for sandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" successfully" Nov 8 00:27:31.609283 containerd[1841]: time="2025-11-08T00:27:31.609238083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:31.609399 containerd[1841]: time="2025-11-08T00:27:31.609306484Z" level=info msg="RemovePodSandbox \"c043548d558dd432f8e3e27765213b6a3f1f2daa21dc0bf41805528a86f78ff4\" returns successfully" Nov 8 00:27:31.609992 containerd[1841]: time="2025-11-08T00:27:31.609963490Z" level=info msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.645 [WARNING][5757] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a34a1c39-70fa-4702-b13c-76dd4159174f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30", Pod:"coredns-668d6bf9bc-dlmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94cedcaa158", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.645 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.645 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" iface="eth0" netns="" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.645 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.645 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.667 [INFO][5764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.667 [INFO][5764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.667 [INFO][5764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.674 [WARNING][5764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.674 [INFO][5764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.675 [INFO][5764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.677474 containerd[1841]: 2025-11-08 00:27:31.676 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.678415 containerd[1841]: time="2025-11-08T00:27:31.677509938Z" level=info msg="TearDown network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" successfully" Nov 8 00:27:31.678415 containerd[1841]: time="2025-11-08T00:27:31.677569938Z" level=info msg="StopPodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" returns successfully" Nov 8 00:27:31.678415 containerd[1841]: time="2025-11-08T00:27:31.678131044Z" level=info msg="RemovePodSandbox for \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" Nov 8 00:27:31.678415 containerd[1841]: time="2025-11-08T00:27:31.678172844Z" level=info msg="Forcibly stopping sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\"" Nov 8 00:27:31.700673 containerd[1841]: time="2025-11-08T00:27:31.700588059Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:31.703723 containerd[1841]: time="2025-11-08T00:27:31.703645388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:31.703855 containerd[1841]: time="2025-11-08T00:27:31.703680989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:31.704574 kubelet[3381]: E1108 00:27:31.704017 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:31.704574 
kubelet[3381]: E1108 00:27:31.704095 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:31.704704 kubelet[3381]: E1108 00:27:31.704443 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,Se
ccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:31.708123 containerd[1841]: time="2025-11-08T00:27:31.708090331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.718 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a34a1c39-70fa-4702-b13c-76dd4159174f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"f93169c2c2eee2c1dfa8a2f61f2e1e918dfffd6c6314136802bce2404fd5ae30", Pod:"coredns-668d6bf9bc-dlmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94cedcaa158", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.718 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.718 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" iface="eth0" netns="" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.718 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.718 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.748 [INFO][5786] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.749 [INFO][5786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.749 [INFO][5786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.756 [WARNING][5786] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.756 [INFO][5786] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" HandleID="k8s-pod-network.76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Workload="ci--4081.3.6--n--75d3e74165-k8s-coredns--668d6bf9bc--dlmrh-eth0" Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.757 [INFO][5786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.759947 containerd[1841]: 2025-11-08 00:27:31.758 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7" Nov 8 00:27:31.760654 containerd[1841]: time="2025-11-08T00:27:31.760006829Z" level=info msg="TearDown network for sandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" successfully" Nov 8 00:27:31.767105 containerd[1841]: time="2025-11-08T00:27:31.767066597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:31.767231 containerd[1841]: time="2025-11-08T00:27:31.767130697Z" level=info msg="RemovePodSandbox \"76d1fee7fe0b92b8ae7e66592c6939aa488566368eeb0f87a5b5db2dacab2ab7\" returns successfully" Nov 8 00:27:31.767722 containerd[1841]: time="2025-11-08T00:27:31.767684302Z" level=info msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.808 [WARNING][5800] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc4797e-b4d5-4e92-8a88-33c63b1aa854", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f", Pod:"calico-apiserver-74c54bbcd-68mbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04c7433d336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.808 [INFO][5800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.808 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" iface="eth0" netns="" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.808 [INFO][5800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.808 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.838 [INFO][5807] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.838 [INFO][5807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.838 [INFO][5807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.845 [WARNING][5807] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.845 [INFO][5807] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.847 [INFO][5807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.850274 containerd[1841]: 2025-11-08 00:27:31.849 [INFO][5800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.850274 containerd[1841]: time="2025-11-08T00:27:31.850236594Z" level=info msg="TearDown network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" successfully" Nov 8 00:27:31.850274 containerd[1841]: time="2025-11-08T00:27:31.850268494Z" level=info msg="StopPodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" returns successfully" Nov 8 00:27:31.852173 containerd[1841]: time="2025-11-08T00:27:31.850919101Z" level=info msg="RemovePodSandbox for \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" Nov 8 00:27:31.852173 containerd[1841]: time="2025-11-08T00:27:31.850955001Z" level=info msg="Forcibly stopping sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\"" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.888 [WARNING][5821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0", GenerateName:"calico-apiserver-74c54bbcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc4797e-b4d5-4e92-8a88-33c63b1aa854", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74c54bbcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"d1b4cb5425ca01de5158aaf989624d53fbb95f285e9a472f6ce5ab46b560413f", Pod:"calico-apiserver-74c54bbcd-68mbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04c7433d336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.889 [INFO][5821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.889 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" iface="eth0" netns="" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.889 [INFO][5821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.889 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.909 [INFO][5829] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.909 [INFO][5829] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.909 [INFO][5829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.914 [WARNING][5829] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.914 [INFO][5829] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" HandleID="k8s-pod-network.705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--apiserver--74c54bbcd--68mbw-eth0" Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.915 [INFO][5829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:27:31.918080 containerd[1841]: 2025-11-08 00:27:31.916 [INFO][5821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6" Nov 8 00:27:31.919008 containerd[1841]: time="2025-11-08T00:27:31.918130745Z" level=info msg="TearDown network for sandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" successfully" Nov 8 00:27:31.925874 containerd[1841]: time="2025-11-08T00:27:31.925833019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:27:31.925982 containerd[1841]: time="2025-11-08T00:27:31.925906620Z" level=info msg="RemovePodSandbox \"705c1f0b53ea13898c9583a4d5c084709af860b65ffa8d3c92c97d6e2a378ef6\" returns successfully" Nov 8 00:27:31.965144 containerd[1841]: time="2025-11-08T00:27:31.965095196Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:31.967446 containerd[1841]: time="2025-11-08T00:27:31.967394218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:31.967632 containerd[1841]: time="2025-11-08T00:27:31.967419118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:31.967697 kubelet[3381]: E1108 00:27:31.967627 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:31.967697 kubelet[3381]: E1108 00:27:31.967680 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:31.967873 kubelet[3381]: E1108 00:27:31.967826 3381 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:31.969325 kubelet[3381]: E1108 00:27:31.969276 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:32.467662 kubelet[3381]: E1108 00:27:32.467590 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:32.469277 kubelet[3381]: E1108 
00:27:32.468947 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:32.475329 systemd-networkd[1393]: cali4d07fc172d6: Gained IPv6LL Nov 8 00:27:32.666841 systemd-networkd[1393]: calif187b22f853: Gained IPv6LL Nov 8 00:27:36.158876 containerd[1841]: time="2025-11-08T00:27:36.158831614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:27:36.411009 containerd[1841]: time="2025-11-08T00:27:36.410844831Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:36.414746 containerd[1841]: time="2025-11-08T00:27:36.414682468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:27:36.414863 containerd[1841]: time="2025-11-08T00:27:36.414786069Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:27:36.415139 kubelet[3381]: E1108 00:27:36.415079 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:36.416017 kubelet[3381]: E1108 00:27:36.415147 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:36.416017 kubelet[3381]: E1108 00:27:36.415317 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c9b09b9c8814b659b69c4a1e963dea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOption
s:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:36.418438 containerd[1841]: time="2025-11-08T00:27:36.418186702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:27:36.673164 containerd[1841]: time="2025-11-08T00:27:36.673024146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:36.675471 containerd[1841]: time="2025-11-08T00:27:36.675421869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:27:36.675600 containerd[1841]: time="2025-11-08T00:27:36.675514270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:36.675767 kubelet[3381]: E1108 00:27:36.675721 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:36.675840 kubelet[3381]: E1108 00:27:36.675780 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:36.675955 kubelet[3381]: E1108 00:27:36.675915 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:
&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:36.677456 kubelet[3381]: E1108 00:27:36.677413 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:27:40.152375 containerd[1841]: time="2025-11-08T00:27:40.152083166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:40.440634 containerd[1841]: 
time="2025-11-08T00:27:40.440491521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:40.442932 containerd[1841]: time="2025-11-08T00:27:40.442882244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:40.443073 containerd[1841]: time="2025-11-08T00:27:40.442968745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:40.443232 kubelet[3381]: E1108 00:27:40.443185 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:40.443658 kubelet[3381]: E1108 00:27:40.443249 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:40.443658 kubelet[3381]: E1108 00:27:40.443408 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:40.444776 kubelet[3381]: E1108 00:27:40.444519 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:41.149935 containerd[1841]: time="2025-11-08T00:27:41.149738097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:27:41.893323 containerd[1841]: time="2025-11-08T00:27:41.893246900Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:41.895704 containerd[1841]: time="2025-11-08T00:27:41.895652623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:27:41.895850 containerd[1841]: time="2025-11-08T00:27:41.895782024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:41.896024 kubelet[3381]: E1108 00:27:41.895970 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:41.896494 kubelet[3381]: E1108 00:27:41.896037 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:41.896494 kubelet[3381]: E1108 00:27:41.896402 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:41.897212 containerd[1841]: time="2025-11-08T00:27:41.897143937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:41.897617 kubelet[3381]: E1108 00:27:41.897559 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:42.150554 containerd[1841]: time="2025-11-08T00:27:42.150180955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:42.153099 containerd[1841]: time="2025-11-08T00:27:42.152802080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:42.153099 containerd[1841]: time="2025-11-08T00:27:42.152900181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:42.153504 kubelet[3381]: E1108 00:27:42.153231 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:42.153504 kubelet[3381]: E1108 00:27:42.153327 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:42.153746 kubelet[3381]: E1108 00:27:42.153678 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgfx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:42.155112 kubelet[3381]: E1108 00:27:42.155072 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:45.150903 containerd[1841]: time="2025-11-08T00:27:45.150562819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:45.394478 containerd[1841]: 
time="2025-11-08T00:27:45.394351748Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:45.397308 containerd[1841]: time="2025-11-08T00:27:45.397267475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:45.397579 containerd[1841]: time="2025-11-08T00:27:45.397353776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:45.397641 kubelet[3381]: E1108 00:27:45.397557 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:45.397641 kubelet[3381]: E1108 00:27:45.397615 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:45.398147 kubelet[3381]: E1108 00:27:45.397766 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:45.399898 containerd[1841]: time="2025-11-08T00:27:45.399871200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:45.638162 containerd[1841]: time="2025-11-08T00:27:45.638103876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:45.641072 containerd[1841]: time="2025-11-08T00:27:45.641014104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:45.641237 containerd[1841]: time="2025-11-08T00:27:45.641123505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:45.641350 kubelet[3381]: E1108 00:27:45.641302 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:45.641471 kubelet[3381]: E1108 00:27:45.641363 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:45.641604 kubelet[3381]: E1108 
00:27:45.641525 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:45.643135 kubelet[3381]: E1108 00:27:45.643096 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:27:47.150064 containerd[1841]: time="2025-11-08T00:27:47.149767801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:47.398121 containerd[1841]: time="2025-11-08T00:27:47.398066305Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:47.401185 containerd[1841]: time="2025-11-08T00:27:47.401059425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
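The records above show containerd getting `http.StatusNotFound` from ghcr.io for each `ghcr.io/flatcar/calico/*:v3.30.4` reference, which kubelet then surfaces as `ErrImagePull`. A minimal sketch of how such a reference breaks down into registry, repository, and tag (the function name is illustrative; an actual existence check would need network access on the node, e.g. `ctr images pull <ref>` or a registry manifest query):

```shell
#!/bin/sh
# Illustrative helper: split an image reference of the form seen in the log
# (registry/repo/path:tag) into its three components. The first path segment
# is the registry host; everything after the last colon is the tag.
parse_image_ref() {
  ref="$1"
  registry="${ref%%/*}"     # e.g. ghcr.io
  rest="${ref#*/}"          # e.g. flatcar/calico/csi:v3.30.4
  repo="${rest%:*}"         # e.g. flatcar/calico/csi
  tag="${rest##*:}"         # e.g. v3.30.4
  printf '%s %s %s\n' "$registry" "$repo" "$tag"
}

parse_image_ref "ghcr.io/flatcar/calico/csi:v3.30.4"
```

With the components separated, one can check on the node whether the tag resolves at all before blaming kubelet, since the log shows the failure originates in reference resolution, not in layer download.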
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:47.401544 containerd[1841]: time="2025-11-08T00:27:47.401157826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:47.401743 kubelet[3381]: E1108 00:27:47.401684 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:47.402473 kubelet[3381]: E1108 00:27:47.401759 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:47.402473 kubelet[3381]: E1108 00:27:47.401942 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4kx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:47.403574 kubelet[3381]: E1108 00:27:47.403261 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:49.153817 kubelet[3381]: E1108 00:27:49.152107 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:27:52.152166 kubelet[3381]: E1108 00:27:52.151425 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:27:52.395023 systemd[1]: run-containerd-runc-k8s.io-8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1-runc.zHNzf3.mount: Deactivated successfully. Nov 8 00:27:52.505488 systemd[1]: run-containerd-runc-k8s.io-8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1-runc.N4GDFn.mount: Deactivated successfully. 
Nov 8 00:27:53.149135 kubelet[3381]: E1108 00:27:53.148976 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:27:54.153246 kubelet[3381]: E1108 00:27:54.153196 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:27:58.157373 kubelet[3381]: E1108 00:27:58.157327 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:27:59.154324 kubelet[3381]: E1108 
00:27:59.154271 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:28:00.156264 containerd[1841]: time="2025-11-08T00:28:00.156102991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:28:00.410841 containerd[1841]: time="2025-11-08T00:28:00.410687245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:00.413469 containerd[1841]: time="2025-11-08T00:28:00.413353371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:28:00.413469 containerd[1841]: time="2025-11-08T00:28:00.413403571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:28:00.413686 kubelet[3381]: E1108 00:28:00.413610 3381 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:28:00.413686 kubelet[3381]: E1108 00:28:00.413671 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:28:00.414204 kubelet[3381]: E1108 00:28:00.413835 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c9b09b9c8814b659b69c4a1e963dea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:00.417444 containerd[1841]: time="2025-11-08T00:28:00.417230108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:28:00.668868 containerd[1841]: time="2025-11-08T00:28:00.668721432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:00.671579 containerd[1841]: time="2025-11-08T00:28:00.671500859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:28:00.671791 containerd[1841]: time="2025-11-08T00:28:00.671561159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:28:00.671873 kubelet[3381]: E1108 00:28:00.671771 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:28:00.671873 kubelet[3381]: E1108 00:28:00.671842 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:28:00.672064 kubelet[3381]: E1108 00:28:00.672004 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:00.673724 kubelet[3381]: E1108 00:28:00.673670 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:28:04.154906 containerd[1841]: time="2025-11-08T00:28:04.154585079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:28:04.407696 containerd[1841]: time="2025-11-08T00:28:04.407557323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:04.411591 
containerd[1841]: time="2025-11-08T00:28:04.411528442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:28:04.411699 containerd[1841]: time="2025-11-08T00:28:04.411647443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:04.411939 kubelet[3381]: E1108 00:28:04.411891 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:28:04.412355 kubelet[3381]: E1108 00:28:04.411956 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:28:04.412355 kubelet[3381]: E1108 00:28:04.412129 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:04.414556 kubelet[3381]: E1108 00:28:04.413620 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:28:05.152181 containerd[1841]: time="2025-11-08T00:28:05.151673581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:28:05.395332 containerd[1841]: time="2025-11-08T00:28:05.395284278Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 8 00:28:05.398691 containerd[1841]: time="2025-11-08T00:28:05.398338593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:28:05.398691 containerd[1841]: time="2025-11-08T00:28:05.398456494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:05.400681 kubelet[3381]: E1108 00:28:05.400629 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:05.400771 kubelet[3381]: E1108 00:28:05.400696 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:05.401017 kubelet[3381]: E1108 00:28:05.400953 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgfx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:05.401828 containerd[1841]: time="2025-11-08T00:28:05.401799410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:28:05.402930 kubelet[3381]: E1108 00:28:05.402819 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:28:05.661488 containerd[1841]: time="2025-11-08T00:28:05.661062285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:05.664253 containerd[1841]: time="2025-11-08T00:28:05.664168100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:28:05.664415 containerd[1841]: time="2025-11-08T00:28:05.664204800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:05.664611 kubelet[3381]: E1108 00:28:05.664572 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:05.665114 kubelet[3381]: E1108 00:28:05.664624 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:05.665114 kubelet[3381]: E1108 00:28:05.664783 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:05.666338 kubelet[3381]: E1108 00:28:05.666305 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:28:11.151244 kubelet[3381]: E1108 00:28:11.151170 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:28:12.151203 containerd[1841]: time="2025-11-08T00:28:12.151144703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:28:12.393173 containerd[1841]: time="2025-11-08T00:28:12.392990993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:12.395593 containerd[1841]: time="2025-11-08T00:28:12.395403516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:28:12.395593 containerd[1841]: time="2025-11-08T00:28:12.395515818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:28:12.395751 kubelet[3381]: E1108 00:28:12.395703 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:28:12.396164 kubelet[3381]: E1108 00:28:12.395765 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:28:12.396164 kubelet[3381]: E1108 00:28:12.395920 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4kx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:12.397462 kubelet[3381]: E1108 00:28:12.397422 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:28:13.150885 containerd[1841]: time="2025-11-08T00:28:13.150832981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:28:13.399723 containerd[1841]: time="2025-11-08T00:28:13.399486038Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:13.402663 containerd[1841]: time="2025-11-08T00:28:13.402145065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:28:13.402663 containerd[1841]: time="2025-11-08T00:28:13.402248666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:28:13.403225 kubelet[3381]: E1108 00:28:13.403001 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:13.403225 kubelet[3381]: E1108 00:28:13.403056 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:13.404822 kubelet[3381]: E1108 00:28:13.404667 3381 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:13.407207 containerd[1841]: time="2025-11-08T00:28:13.407160614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:28:13.654042 containerd[1841]: time="2025-11-08T00:28:13.653526549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:13.655976 containerd[1841]: time="2025-11-08T00:28:13.655929472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:28:13.656185 containerd[1841]: time="2025-11-08T00:28:13.656039073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:28:13.656265 kubelet[3381]: E1108 00:28:13.656143 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:13.656265 kubelet[3381]: E1108 00:28:13.656200 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:13.656399 kubelet[3381]: E1108 00:28:13.656345 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:13.658441 kubelet[3381]: E1108 00:28:13.658395 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:28:18.151734 kubelet[3381]: E1108 00:28:18.151479 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:28:19.150406 kubelet[3381]: 
E1108 00:28:19.150356 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:28:21.150974 kubelet[3381]: E1108 00:28:21.150833 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:28:22.514678 systemd[1]: run-containerd-runc-k8s.io-8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1-runc.xS4Hv3.mount: Deactivated successfully. 
Nov 8 00:28:26.166865 kubelet[3381]: E1108 00:28:26.165015 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:28:26.166865 kubelet[3381]: E1108 00:28:26.165393 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:28:26.166865 kubelet[3381]: E1108 00:28:26.165948 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:28:29.149934 kubelet[3381]: E1108 00:28:29.149882 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:28:31.929460 containerd[1841]: time="2025-11-08T00:28:31.929082726Z" level=info msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:31.994 [WARNING][5942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e41043-2a66-4025-a79c-fc0f732e85fb", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94", Pod:"csi-node-driver-wt8ss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d07fc172d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:31.995 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:31.995 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" iface="eth0" netns="" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:31.995 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:31.995 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.027 [INFO][5950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.027 [INFO][5950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.027 [INFO][5950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.034 [WARNING][5950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.035 [INFO][5950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.036 [INFO][5950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:28:32.039807 containerd[1841]: 2025-11-08 00:28:32.038 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.039807 containerd[1841]: time="2025-11-08T00:28:32.039757647Z" level=info msg="TearDown network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" successfully" Nov 8 00:28:32.041555 containerd[1841]: time="2025-11-08T00:28:32.040584355Z" level=info msg="StopPodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" returns successfully" Nov 8 00:28:32.041555 containerd[1841]: time="2025-11-08T00:28:32.041294262Z" level=info msg="RemovePodSandbox for \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" Nov 8 00:28:32.041555 containerd[1841]: time="2025-11-08T00:28:32.041344462Z" level=info msg="Forcibly stopping sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\"" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.094 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e41043-2a66-4025-a79c-fc0f732e85fb", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"28feffdc7a562b7ef28d8cb83a563b818cbfa3462b3dff7ea189257782ca4c94", Pod:"csi-node-driver-wt8ss", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d07fc172d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.094 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.094 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" iface="eth0" netns="" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.094 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.094 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.122 [INFO][5972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.123 [INFO][5972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.123 [INFO][5972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.130 [WARNING][5972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.131 [INFO][5972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" HandleID="k8s-pod-network.dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Workload="ci--4081.3.6--n--75d3e74165-k8s-csi--node--driver--wt8ss-eth0" Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.132 [INFO][5972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:28:32.135117 containerd[1841]: 2025-11-08 00:28:32.133 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d" Nov 8 00:28:32.135798 containerd[1841]: time="2025-11-08T00:28:32.135167728Z" level=info msg="TearDown network for sandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" successfully" Nov 8 00:28:32.141354 containerd[1841]: time="2025-11-08T00:28:32.141305785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:28:32.141485 containerd[1841]: time="2025-11-08T00:28:32.141387385Z" level=info msg="RemovePodSandbox \"dac41fb9bd0ece5a5482fea653b00acbd5ea63886923f40db1d8647a03e9d32d\" returns successfully" Nov 8 00:28:32.143562 containerd[1841]: time="2025-11-08T00:28:32.142127092Z" level=info msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.207 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0", GenerateName:"calico-kube-controllers-76c9ff5fff-", Namespace:"calico-system", SelfLink:"", UID:"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c9ff5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc", Pod:"calico-kube-controllers-76c9ff5fff-tvhdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif187b22f853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.208 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.208 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" iface="eth0" netns="" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.208 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.208 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.278 [INFO][5994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.278 [INFO][5994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.278 [INFO][5994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.299 [WARNING][5994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.300 [INFO][5994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.314 [INFO][5994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:28:32.323765 containerd[1841]: 2025-11-08 00:28:32.319 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.324430 containerd[1841]: time="2025-11-08T00:28:32.323832269Z" level=info msg="TearDown network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" successfully" Nov 8 00:28:32.324430 containerd[1841]: time="2025-11-08T00:28:32.323878370Z" level=info msg="StopPodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" returns successfully" Nov 8 00:28:32.325831 containerd[1841]: time="2025-11-08T00:28:32.325782087Z" level=info msg="RemovePodSandbox for \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" Nov 8 00:28:32.325831 containerd[1841]: time="2025-11-08T00:28:32.325836188Z" level=info msg="Forcibly stopping sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\"" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.408 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0", GenerateName:"calico-kube-controllers-76c9ff5fff-", Namespace:"calico-system", SelfLink:"", UID:"c14ad17b-6d81-4bc3-936c-20b5e88e9ac4", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 27, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c9ff5fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-75d3e74165", ContainerID:"5bf2c7d407f8f4f3fdbf89e6f49ed9cef3b9b27f3e622652a11929540a4ccdbc", Pod:"calico-kube-controllers-76c9ff5fff-tvhdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif187b22f853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.408 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.408 [INFO][6009] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" iface="eth0" netns="" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.409 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.409 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.443 [INFO][6016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.443 [INFO][6016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.446 [INFO][6016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.452 [WARNING][6016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.453 [INFO][6016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" HandleID="k8s-pod-network.5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Workload="ci--4081.3.6--n--75d3e74165-k8s-calico--kube--controllers--76c9ff5fff--tvhdb-eth0" Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.455 [INFO][6016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:28:32.458064 containerd[1841]: 2025-11-08 00:28:32.456 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b" Nov 8 00:28:32.458743 containerd[1841]: time="2025-11-08T00:28:32.458109309Z" level=info msg="TearDown network for sandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" successfully" Nov 8 00:28:32.467959 containerd[1841]: time="2025-11-08T00:28:32.467792398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:28:32.467959 containerd[1841]: time="2025-11-08T00:28:32.467862199Z" level=info msg="RemovePodSandbox \"5820a25c76c014dd6cc3ac0ed2e95c8f598ed01459dcd6559a81dc47da3aa15b\" returns successfully" Nov 8 00:28:33.152122 kubelet[3381]: E1108 00:28:33.152075 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:28:34.151341 kubelet[3381]: E1108 00:28:34.151136 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:28:37.152976 kubelet[3381]: E1108 00:28:37.152926 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:28:37.153613 kubelet[3381]: E1108 00:28:37.152393 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:28:40.152558 kubelet[3381]: E1108 00:28:40.152101 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:28:43.150439 kubelet[3381]: E1108 00:28:43.149972 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:28:46.154053 containerd[1841]: time="2025-11-08T00:28:46.153795134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:28:46.417371 containerd[1841]: time="2025-11-08T00:28:46.417228241Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:46.420489 containerd[1841]: time="2025-11-08T00:28:46.420319169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:28:46.420489 containerd[1841]: time="2025-11-08T00:28:46.420417870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:46.420741 kubelet[3381]: E1108 00:28:46.420683 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:28:46.421259 kubelet[3381]: E1108 00:28:46.420761 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:28:46.421259 kubelet[3381]: E1108 00:28:46.420945 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,
SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-49g92_calico-system(55c9436a-adbc-4a13-bbea-b53d5615fa79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:46.422626 kubelet[3381]: E1108 00:28:46.422583 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:28:47.150502 containerd[1841]: time="2025-11-08T00:28:47.149563633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:28:47.395883 containerd[1841]: time="2025-11-08T00:28:47.395640981Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:47.399709 containerd[1841]: time="2025-11-08T00:28:47.399648518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:28:47.399860 containerd[1841]: time="2025-11-08T00:28:47.399684018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:47.399967 kubelet[3381]: E1108 00:28:47.399930 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:47.400050 kubelet[3381]: E1108 00:28:47.399986 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 
00:28:47.400176 kubelet[3381]: E1108 00:28:47.400133 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgfx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:47.401688 kubelet[3381]: E1108 00:28:47.401591 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:28:49.150673 kubelet[3381]: E1108 00:28:49.150516 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:28:49.154913 containerd[1841]: time="2025-11-08T00:28:49.154481752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:28:49.407243 containerd[1841]: time="2025-11-08T00:28:49.406585056Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:49.409643 containerd[1841]: time="2025-11-08T00:28:49.409448982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:28:49.409643 containerd[1841]: time="2025-11-08T00:28:49.409578983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:28:49.410812 kubelet[3381]: E1108 00:28:49.409961 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:28:49.410812 kubelet[3381]: E1108 00:28:49.410016 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:28:49.410812 kubelet[3381]: E1108 00:28:49.410152 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c9b09b9c8814b659b69c4a1e963dea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:49.413197 containerd[1841]: time="2025-11-08T00:28:49.412544310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:28:49.658249 containerd[1841]: time="2025-11-08T00:28:49.657693950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:49.660620 containerd[1841]: time="2025-11-08T00:28:49.660552576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:28:49.660620 containerd[1841]: time="2025-11-08T00:28:49.660574576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:28:49.660885 kubelet[3381]: E1108 00:28:49.660837 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:28:49.660994 kubelet[3381]: E1108 00:28:49.660907 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:28:49.661087 kubelet[3381]: E1108 00:28:49.661046 3381 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:49.662774 kubelet[3381]: E1108 00:28:49.662567 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:28:52.529411 systemd[1]: run-containerd-runc-k8s.io-8333c64a5565bce2a2c05fddf3e4c08182d580ee272c3d73a9d3efd58a4420d1-runc.XVa5wE.mount: Deactivated successfully. 
Nov 8 00:28:54.157229 containerd[1841]: time="2025-11-08T00:28:54.156960050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:28:54.407521 containerd[1841]: time="2025-11-08T00:28:54.407327324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:54.410094 containerd[1841]: time="2025-11-08T00:28:54.409997347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:28:54.410365 containerd[1841]: time="2025-11-08T00:28:54.410040148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:28:54.410510 kubelet[3381]: E1108 00:28:54.410463 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:54.410940 kubelet[3381]: E1108 00:28:54.410529 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:28:54.410940 kubelet[3381]: E1108 00:28:54.410707 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:54.413264 containerd[1841]: time="2025-11-08T00:28:54.413219275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:28:54.661304 containerd[1841]: time="2025-11-08T00:28:54.661104228Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:54.664487 containerd[1841]: time="2025-11-08T00:28:54.663991053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:28:54.664487 containerd[1841]: time="2025-11-08T00:28:54.664022553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:28:54.665225 kubelet[3381]: E1108 00:28:54.664385 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:54.665225 kubelet[3381]: E1108 00:28:54.664463 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:54.665225 kubelet[3381]: E1108 
00:28:54.664687 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wt8ss_calico-system(a8e41043-2a66-4025-a79c-fc0f732e85fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:54.666950 kubelet[3381]: E1108 00:28:54.666687 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:28:57.404230 systemd[1]: Started sshd@7-10.200.8.16:22-10.200.16.10:42892.service - OpenSSH per-connection server daemon (10.200.16.10:42892). Nov 8 00:28:58.044703 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 42892 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:28:58.045308 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:58.054335 systemd-logind[1801]: New session 10 of user core. Nov 8 00:28:58.060892 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:28:58.155911 containerd[1841]: time="2025-11-08T00:28:58.155571770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:28:58.156397 kubelet[3381]: E1108 00:28:58.155748 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:28:58.446488 containerd[1841]: time="2025-11-08T00:28:58.445422487Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:58.449226 containerd[1841]: time="2025-11-08T00:28:58.448392713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:28:58.449226 containerd[1841]: time="2025-11-08T00:28:58.448503114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:58.449403 kubelet[3381]: E1108 00:28:58.448680 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:58.449403 kubelet[3381]: E1108 
00:28:58.448735 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:58.449403 kubelet[3381]: E1108 00:28:58.448885 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-68mbw_calico-apiserver(2bc4797e-b4d5-4e92-8a88-33c63b1aa854): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:58.452411 kubelet[3381]: E1108 00:28:58.450943 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:28:58.603741 sshd[6057]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:58.608498 systemd-logind[1801]: Session 10 logged out. Waiting for processes to exit. 
Nov 8 00:28:58.610076 systemd[1]: sshd@7-10.200.8.16:22-10.200.16.10:42892.service: Deactivated successfully. Nov 8 00:28:58.614282 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:28:58.616346 systemd-logind[1801]: Removed session 10. Nov 8 00:29:00.150920 kubelet[3381]: E1108 00:29:00.150836 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:29:03.155697 containerd[1841]: time="2025-11-08T00:29:03.155216761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:03.156754 kubelet[3381]: E1108 00:29:03.156257 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:29:03.414706 containerd[1841]: time="2025-11-08T00:29:03.414484781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:03.417255 containerd[1841]: time="2025-11-08T00:29:03.417202705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:03.417436 containerd[1841]: time="2025-11-08T00:29:03.417289306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:03.417484 kubelet[3381]: E1108 00:29:03.417438 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:03.417555 kubelet[3381]: E1108 00:29:03.417496 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:03.417716 kubelet[3381]: E1108 00:29:03.417671 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4kx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c9ff5fff-tvhdb_calico-system(c14ad17b-6d81-4bc3-936c-20b5e88e9ac4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:03.419220 kubelet[3381]: E1108 00:29:03.419188 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:29:03.723820 systemd[1]: Started sshd@8-10.200.8.16:22-10.200.16.10:42552.service - OpenSSH per-connection server daemon (10.200.16.10:42552). 
Nov 8 00:29:04.357076 sshd[6093]: Accepted publickey for core from 10.200.16.10 port 42552 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:04.360825 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:04.367707 systemd-logind[1801]: New session 11 of user core. Nov 8 00:29:04.373924 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:29:04.891822 sshd[6093]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:04.898169 systemd[1]: sshd@8-10.200.8.16:22-10.200.16.10:42552.service: Deactivated successfully. Nov 8 00:29:04.902353 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:29:04.903413 systemd-logind[1801]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:29:04.905574 systemd-logind[1801]: Removed session 11. Nov 8 00:29:06.154246 kubelet[3381]: E1108 00:29:06.154162 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:29:10.005164 systemd[1]: Started 
sshd@9-10.200.8.16:22-10.200.16.10:48024.service - OpenSSH per-connection server daemon (10.200.16.10:48024). Nov 8 00:29:10.154776 kubelet[3381]: E1108 00:29:10.154730 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:29:10.644064 sshd[6110]: Accepted publickey for core from 10.200.16.10 port 48024 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:10.646639 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:10.652871 systemd-logind[1801]: New session 12 of user core. Nov 8 00:29:10.661862 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:29:11.207023 sshd[6110]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:11.210876 systemd[1]: sshd@9-10.200.8.16:22-10.200.16.10:48024.service: Deactivated successfully. Nov 8 00:29:11.215922 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:29:11.217367 systemd-logind[1801]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:29:11.218285 systemd-logind[1801]: Removed session 12. Nov 8 00:29:11.330896 systemd[1]: Started sshd@10-10.200.8.16:22-10.200.16.10:48036.service - OpenSSH per-connection server daemon (10.200.16.10:48036). 
Nov 8 00:29:11.959388 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 48036 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:11.962143 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:11.970435 systemd-logind[1801]: New session 13 of user core. Nov 8 00:29:11.976308 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:29:12.568291 sshd[6125]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:12.572786 systemd[1]: sshd@10-10.200.8.16:22-10.200.16.10:48036.service: Deactivated successfully. Nov 8 00:29:12.580104 systemd-logind[1801]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:29:12.580887 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:29:12.585618 systemd-logind[1801]: Removed session 13. Nov 8 00:29:12.674999 systemd[1]: Started sshd@11-10.200.8.16:22-10.200.16.10:48038.service - OpenSSH per-connection server daemon (10.200.16.10:48038). Nov 8 00:29:13.151365 kubelet[3381]: E1108 00:29:13.151215 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:29:13.314565 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 48038 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:13.317514 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:13.328593 systemd-logind[1801]: New session 14 of user core. 
Nov 8 00:29:13.337045 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:29:13.865784 sshd[6137]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:13.871645 systemd-logind[1801]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:29:13.875316 systemd[1]: sshd@11-10.200.8.16:22-10.200.16.10:48038.service: Deactivated successfully. Nov 8 00:29:13.888728 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:29:13.892807 systemd-logind[1801]: Removed session 14. Nov 8 00:29:15.151619 kubelet[3381]: E1108 00:29:15.151551 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:29:16.150799 kubelet[3381]: E1108 00:29:16.150706 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:29:18.154024 kubelet[3381]: E1108 00:29:18.153614 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:29:18.974827 systemd[1]: Started sshd@12-10.200.8.16:22-10.200.16.10:48040.service - OpenSSH per-connection server daemon (10.200.16.10:48040). Nov 8 00:29:19.623747 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 48040 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:19.626192 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:19.634587 systemd-logind[1801]: New session 15 of user core. Nov 8 00:29:19.641589 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 8 00:29:20.162748 kubelet[3381]: E1108 00:29:20.162664 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:29:20.254779 sshd[6151]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:20.263157 systemd-logind[1801]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:29:20.266406 systemd[1]: sshd@12-10.200.8.16:22-10.200.16.10:48040.service: Deactivated successfully. Nov 8 00:29:20.276359 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:29:20.282796 systemd-logind[1801]: Removed session 15. 
Nov 8 00:29:22.154602 kubelet[3381]: E1108 00:29:22.151567 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:29:25.151579 kubelet[3381]: E1108 00:29:25.151046 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:29:25.362522 systemd[1]: Started sshd@13-10.200.8.16:22-10.200.16.10:52286.service - OpenSSH per-connection server daemon (10.200.16.10:52286). Nov 8 00:29:25.996618 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 52286 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:25.999574 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:26.012850 systemd-logind[1801]: New session 16 of user core. Nov 8 00:29:26.016899 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:29:26.151367 kubelet[3381]: E1108 00:29:26.150270 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:29:26.623805 sshd[6190]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:26.628816 systemd-logind[1801]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:29:26.631371 systemd[1]: sshd@13-10.200.8.16:22-10.200.16.10:52286.service: Deactivated successfully. Nov 8 00:29:26.640624 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:29:26.642334 systemd-logind[1801]: Removed session 16. 
Nov 8 00:29:29.151028 kubelet[3381]: E1108 00:29:29.150929 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:29:31.152753 kubelet[3381]: E1108 00:29:31.152697 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:29:31.731884 systemd[1]: Started sshd@14-10.200.8.16:22-10.200.16.10:55498.service - OpenSSH per-connection server daemon (10.200.16.10:55498). Nov 8 00:29:32.155551 kubelet[3381]: E1108 00:29:32.155483 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:29:32.364866 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 55498 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:32.367217 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:32.375026 systemd-logind[1801]: New session 17 of user core. Nov 8 00:29:32.380199 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:29:32.943543 sshd[6206]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:32.949122 systemd[1]: sshd@14-10.200.8.16:22-10.200.16.10:55498.service: Deactivated successfully. Nov 8 00:29:32.957660 systemd-logind[1801]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:29:32.958775 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:29:32.961893 systemd-logind[1801]: Removed session 17. 
Nov 8 00:29:35.152160 kubelet[3381]: E1108 00:29:35.152107 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:29:38.056832 systemd[1]: Started sshd@15-10.200.8.16:22-10.200.16.10:55514.service - OpenSSH per-connection server daemon (10.200.16.10:55514). Nov 8 00:29:38.676969 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 55514 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:38.678455 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:38.682566 systemd-logind[1801]: New session 18 of user core. Nov 8 00:29:38.685150 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:29:39.213832 sshd[6222]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:39.218571 systemd[1]: sshd@15-10.200.8.16:22-10.200.16.10:55514.service: Deactivated successfully. Nov 8 00:29:39.228680 systemd-logind[1801]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:29:39.229871 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:29:39.231887 systemd-logind[1801]: Removed session 18. Nov 8 00:29:39.326839 systemd[1]: Started sshd@16-10.200.8.16:22-10.200.16.10:55516.service - OpenSSH per-connection server daemon (10.200.16.10:55516). 
Nov 8 00:29:39.983001 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 55516 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:39.984788 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:39.989064 systemd-logind[1801]: New session 19 of user core. Nov 8 00:29:39.994840 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:29:40.150766 kubelet[3381]: E1108 00:29:40.150619 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:29:40.152402 kubelet[3381]: E1108 00:29:40.151915 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:29:40.594664 sshd[6236]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:40.598103 systemd-logind[1801]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:29:40.599890 systemd[1]: sshd@16-10.200.8.16:22-10.200.16.10:55516.service: Deactivated successfully. 
Nov 8 00:29:40.610518 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:29:40.612370 systemd-logind[1801]: Removed session 19. Nov 8 00:29:40.705233 systemd[1]: Started sshd@17-10.200.8.16:22-10.200.16.10:41284.service - OpenSSH per-connection server daemon (10.200.16.10:41284). Nov 8 00:29:41.348358 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 41284 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:41.349477 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:41.357101 systemd-logind[1801]: New session 20 of user core. Nov 8 00:29:41.362832 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:29:42.646766 sshd[6248]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:42.652236 systemd[1]: sshd@17-10.200.8.16:22-10.200.16.10:41284.service: Deactivated successfully. Nov 8 00:29:42.657697 systemd-logind[1801]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:29:42.658424 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:29:42.660150 systemd-logind[1801]: Removed session 20. Nov 8 00:29:42.768122 systemd[1]: Started sshd@18-10.200.8.16:22-10.200.16.10:41300.service - OpenSSH per-connection server daemon (10.200.16.10:41300). Nov 8 00:29:43.424374 sshd[6267]: Accepted publickey for core from 10.200.16.10 port 41300 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:43.426160 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:43.431728 systemd-logind[1801]: New session 21 of user core. Nov 8 00:29:43.440853 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 8 00:29:44.154738 kubelet[3381]: E1108 00:29:44.154674 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:29:44.158906 sshd[6267]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:44.162988 systemd[1]: sshd@18-10.200.8.16:22-10.200.16.10:41300.service: Deactivated successfully. Nov 8 00:29:44.170763 systemd-logind[1801]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:29:44.171505 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:29:44.174114 systemd-logind[1801]: Removed session 21. Nov 8 00:29:44.269823 systemd[1]: Started sshd@19-10.200.8.16:22-10.200.16.10:41316.service - OpenSSH per-connection server daemon (10.200.16.10:41316). Nov 8 00:29:44.925138 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 41316 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:44.927076 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:44.933231 systemd-logind[1801]: New session 22 of user core. 
Nov 8 00:29:44.939886 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:29:45.471825 sshd[6279]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:45.475814 systemd-logind[1801]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:29:45.477825 systemd[1]: sshd@19-10.200.8.16:22-10.200.16.10:41316.service: Deactivated successfully. Nov 8 00:29:45.484775 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:29:45.488180 systemd-logind[1801]: Removed session 22. Nov 8 00:29:46.150184 kubelet[3381]: E1108 00:29:46.149827 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:29:46.151753 kubelet[3381]: E1108 00:29:46.151676 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:29:48.151997 kubelet[3381]: E1108 00:29:48.151955 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:29:50.586803 systemd[1]: Started sshd@20-10.200.8.16:22-10.200.16.10:32798.service - OpenSSH per-connection server daemon (10.200.16.10:32798). Nov 8 00:29:51.241302 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 32798 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:51.243664 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:51.250423 systemd-logind[1801]: New session 23 of user core. Nov 8 00:29:51.253856 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:29:51.826596 sshd[6295]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:51.830091 systemd[1]: sshd@20-10.200.8.16:22-10.200.16.10:32798.service: Deactivated successfully. Nov 8 00:29:51.841589 systemd-logind[1801]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:29:51.842407 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:29:51.844522 systemd-logind[1801]: Removed session 23. 
Nov 8 00:29:52.154642 kubelet[3381]: E1108 00:29:52.152591 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:29:55.150955 kubelet[3381]: E1108 00:29:55.150855 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17" Nov 8 00:29:56.940676 systemd[1]: Started sshd@21-10.200.8.16:22-10.200.16.10:32802.service - OpenSSH per-connection server daemon (10.200.16.10:32802). 
Nov 8 00:29:57.151556 kubelet[3381]: E1108 00:29:57.151479 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400" Nov 8 00:29:57.799382 sshd[6329]: Accepted publickey for core from 10.200.16.10 port 32802 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:29:57.802055 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:57.812248 systemd-logind[1801]: New session 24 of user core. Nov 8 00:29:57.818592 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 8 00:29:58.160242 kubelet[3381]: E1108 00:29:58.160082 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4" Nov 8 00:29:58.380337 sshd[6329]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:58.387199 systemd-logind[1801]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:29:58.388860 systemd[1]: sshd@21-10.200.8.16:22-10.200.16.10:32802.service: Deactivated successfully. Nov 8 00:29:58.400564 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:29:58.402595 systemd-logind[1801]: Removed session 24. 
Nov 8 00:29:59.150385 kubelet[3381]: E1108 00:29:59.150201 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb" Nov 8 00:30:03.152663 kubelet[3381]: E1108 00:30:03.151688 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854" Nov 8 00:30:03.498826 systemd[1]: Started sshd@22-10.200.8.16:22-10.200.16.10:38626.service - OpenSSH per-connection server daemon (10.200.16.10:38626). 
Nov 8 00:30:04.149794 sshd[6345]: Accepted publickey for core from 10.200.16.10 port 38626 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:30:04.151600 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:04.158197 systemd-logind[1801]: New session 25 of user core. Nov 8 00:30:04.163869 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:30:04.694433 sshd[6345]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:04.698281 systemd-logind[1801]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:30:04.701664 systemd[1]: sshd@22-10.200.8.16:22-10.200.16.10:38626.service: Deactivated successfully. Nov 8 00:30:04.706375 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:30:04.708372 systemd-logind[1801]: Removed session 25. Nov 8 00:30:05.149834 kubelet[3381]: E1108 00:30:05.149355 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-49g92" podUID="55c9436a-adbc-4a13-bbea-b53d5615fa79" Nov 8 00:30:09.815859 systemd[1]: Started sshd@23-10.200.8.16:22-10.200.16.10:38640.service - OpenSSH per-connection server daemon (10.200.16.10:38640). 
Nov 8 00:30:10.153025 containerd[1841]: time="2025-11-08T00:30:10.152812411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:30:10.398308 containerd[1841]: time="2025-11-08T00:30:10.398258984Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:10.401297 containerd[1841]: time="2025-11-08T00:30:10.401259711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:30:10.401415 containerd[1841]: time="2025-11-08T00:30:10.401349012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:30:10.401586 kubelet[3381]: E1108 00:30:10.401517 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:10.402113 kubelet[3381]: E1108 00:30:10.401601 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:10.402113 kubelet[3381]: E1108 00:30:10.401803 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgfx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74c54bbcd-hkm9n_calico-apiserver(22022de4-7569-4e54-9627-bc50a2dfeb17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:10.403146 kubelet[3381]: E1108 00:30:10.403057 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-hkm9n" podUID="22022de4-7569-4e54-9627-bc50a2dfeb17"
Nov 8 00:30:10.473294 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 38640 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:30:10.475091 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:10.480659 systemd-logind[1801]: New session 26 of user core.
Nov 8 00:30:10.488786 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:30:11.002705 sshd[6371]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:11.009862 systemd-logind[1801]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:30:11.013238 systemd[1]: sshd@23-10.200.8.16:22-10.200.16.10:38640.service: Deactivated successfully.
Nov 8 00:30:11.021849 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:30:11.027300 systemd-logind[1801]: Removed session 26.
Nov 8 00:30:11.150742 containerd[1841]: time="2025-11-08T00:30:11.150324944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:30:11.417889 containerd[1841]: time="2025-11-08T00:30:11.417689860Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:11.422456 containerd[1841]: time="2025-11-08T00:30:11.420571685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:30:11.422456 containerd[1841]: time="2025-11-08T00:30:11.420627086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:30:11.422654 kubelet[3381]: E1108 00:30:11.420925 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:11.422654 kubelet[3381]: E1108 00:30:11.420984 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:11.422654 kubelet[3381]: E1108 00:30:11.421107 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3c9b09b9c8814b659b69c4a1e963dea9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:11.428189 containerd[1841]: time="2025-11-08T00:30:11.427977850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:30:11.676517 containerd[1841]: time="2025-11-08T00:30:11.674679786Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:11.677924 containerd[1841]: time="2025-11-08T00:30:11.677797613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:30:11.677924 containerd[1841]: time="2025-11-08T00:30:11.677835813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:30:11.678120 kubelet[3381]: E1108 00:30:11.678074 3381 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:11.678186 kubelet[3381]: E1108 00:30:11.678140 3381 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:11.678511 kubelet[3381]: E1108 00:30:11.678283 3381 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c9wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795578dc98-zld7l_calico-system(47c77009-fff2-4caa-920d-906bda818400): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:11.679884 kubelet[3381]: E1108 00:30:11.679815 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795578dc98-zld7l" podUID="47c77009-fff2-4caa-920d-906bda818400"
Nov 8 00:30:13.150588 kubelet[3381]: E1108 00:30:13.150503 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c9ff5fff-tvhdb" podUID="c14ad17b-6d81-4bc3-936c-20b5e88e9ac4"
Nov 8 00:30:14.156615 kubelet[3381]: E1108 00:30:14.156019 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wt8ss" podUID="a8e41043-2a66-4025-a79c-fc0f732e85fb"
Nov 8 00:30:16.110427 systemd[1]: Started sshd@24-10.200.8.16:22-10.200.16.10:49762.service - OpenSSH per-connection server daemon (10.200.16.10:49762).
Nov 8 00:30:16.743252 sshd[6387]: Accepted publickey for core from 10.200.16.10 port 49762 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:30:16.744855 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:16.749859 systemd-logind[1801]: New session 27 of user core.
Nov 8 00:30:16.752843 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 00:30:17.151425 kubelet[3381]: E1108 00:30:17.151385 3381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74c54bbcd-68mbw" podUID="2bc4797e-b4d5-4e92-8a88-33c63b1aa854"
Nov 8 00:30:17.253899 sshd[6387]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:17.257508 systemd-logind[1801]: Session 27 logged out. Waiting for processes to exit.
Nov 8 00:30:17.258641 systemd[1]: sshd@24-10.200.8.16:22-10.200.16.10:49762.service: Deactivated successfully.
Nov 8 00:30:17.262932 systemd[1]: session-27.scope: Deactivated successfully.
Nov 8 00:30:17.266050 systemd-logind[1801]: Removed session 27.